
Connecting

Learn how to connect to MariaDB Server. This section details various methods and tools for establishing secure and efficient connections to your database from different applications and environments.

Backup & Restore

Learn to back up and restore MariaDB Server databases. This section covers essential strategies and tools to ensure data safety and quick recovery from potential data loss.

Basics

Grasp the basics of using MariaDB Server. This section introduces fundamental concepts, common SQL commands, and essential operations to get you started with your database.

Data Handling

Learn effective data handling in MariaDB Server. This section covers data types, storage engines, data manipulation, and best practices for managing your information efficiently.

Server Usage

Learn how to effectively use MariaDB Server. This section covers SQL statements, built-in functions, client utilities, and best practices for daily database operations.

  • Basics
  • Connecting
  • Data Handling
  • Backup & Restore
  • Tables
  • Partitioning Tables
  • Stored Routines
  • Storage Engines
  • Triggers & Events
  • User-Defined Functions
  • Views

mariadb-backup

Get an overview of MariaDB Backup. This section introduces the hot physical backup tool, explaining its capabilities for efficient and consistent backups of your MariaDB Server.

  • mariadb-backup Overview
  • mariadb-backup Options
  • Full Backup and Restore with mariadb-backup
  • Partial Backup and Restore with mariadb-backup
  • Restoring Individual Tables and Partitions with mariadb-backup
  • Setting up a Replica with mariadb-backup
  • Files Backed Up By mariadb-backup
  • Files Created by mariadb-backup
  • Using Encryption and Compression Tools With mariadb-backup
  • How mariadb-backup Works
  • mariadb-backup and BACKUP STAGE Commands
  • Individual Database Restores with mariadb-backup from Full Backup

Partitioning Types

Explore different partitioning types for MariaDB Server tables. Understand range, list, hash, and key partitioning to optimize data management and improve query performance.

Stored Procedures

Master stored procedures in MariaDB Server. This section covers creating, executing, and managing these powerful routines to encapsulate complex logic and improve application performance.

Partitioning Tables

Optimize large tables in MariaDB Server with partitioning. Learn how to divide tables into smaller, manageable parts for improved performance, easier maintenance, and scalability.

ARIA

Learn about the Aria storage engine in MariaDB Server. Understand its features, advantages, and use cases, particularly for crash-safe operations and transactional workloads.

Using CONNECT

The CONNECT storage engine has been deprecated.

Stored Routines

Automate tasks in MariaDB Server with stored routines. Learn to create and manage stored procedures and functions for enhanced database efficiency and code reusability.

A MariaDB Primer

A beginner-friendly primer on using the mariadb command-line client to log in, create databases, and execute basic SQL commands.

This primer is designed to teach you the basics of getting information into and out of an existing MariaDB database using the mariadb command-line client program. It's not a complete reference and will not touch on any advanced topics. It is just a quick jump-start into using MariaDB.

Logging into MariaDB

Log into your MariaDB server from the command-line like so:
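The command takes this general form:

```shell
mariadb -u user_name -p -h ip_address db_name
```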

Replace user_name with your database username and ip_address with the host name or address of your server. If you're accessing MariaDB from the same server you're logged into, omit -h and the ip_address. Replace db_name with the name of the database you want to access (such as test, which sometimes comes already created for testing purposes. Note that Windows does not create this database, and some setups remove the test database by running mariadb-secure-installation; in those cases you can leave db_name out).

When prompted to enter your password, enter it. If your login is successful you should see something that looks similar to this:
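The exact banner varies by version and platform, but it looks roughly like this:

```
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 5
Server version: 11.4.x-MariaDB

MariaDB [test]>
```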

This is where you will enter all of your SQL statements. More about those later. For now, let's look at the components of the prompt: the "MariaDB" part means that you are connected to a MariaDB database server. The word between the brackets is the name of your default database, the test database in this example.

The Basics of a Database

To make changes to a database or to retrieve data, you will need to enter an SQL statement. SQL stands for Structured Query Language. An SQL statement that requests data is called a query. Databases store information in tables. They are similar to spreadsheets, but much more efficient at managing data.

Note that the test database may not contain any data yet. If you want to follow along with the primer, copy and paste the following into the client. This will create the tables we will use and add some data to them. Don't worry about understanding them yet; we'll get to that later.
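The original code listing is not reproduced here; a minimal stand-in, matching the books table used in the rest of this primer (the column names and sample rows are reconstructions), would be:

```sql
CREATE TABLE books (
  BookID INT NOT NULL PRIMARY KEY,
  Title  VARCHAR(100) NOT NULL,
  SeriesID INT,
  AuthorID INT
);

INSERT INTO books (BookID, Title, SeriesID, AuthorID) VALUES
  (1, 'The Fellowship of the Ring', 1, 1),
  (2, 'The Two Towers',             1, 1),
  (3, 'The Return of the King',     1, 1),
  (4, 'The Sum of All Men',         2, 2),
  (5, 'Brotherhood of the Wolf',    2, 2),
  (6, 'Wizardborn',                 2, 2),
  (7, 'The Hobbbit',             NULL, 1);  -- deliberate misspelling, fixed later in this primer
```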

Notice the semi-colons used above. The client lets you enter very complex SQL statements over multiple lines. It won't send an SQL statement until you type a semi-colon and hit [Enter].

Let's look at what you've done so far. Enter the following:
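To list the tables in the current database:

```sql
SHOW TABLES;
```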

Notice that this displays a list of the tables in the database. If you didn't already have tables in your test database, your results should look the same as above. Let's now enter the following to get information about one of these tables:
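For the books table:

```sql
DESCRIBE books;
```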

The main bit of information of interest to us is the Field column. The other columns provide useful information about the structure and type of data in the database, but the Field column gives us the column names, which are needed to retrieve data from the table.

Let's retrieve data from the books table. We'll do so by executing a statement like so:
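```sql
SELECT * FROM books;
```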

This SQL statement, or query, asks the database to show us all of the data in the books table. The asterisk ('*') is a wildcard that selects all columns.

Inserting Data

Suppose now that we want to add another book to this table. We'll add the book Lair of Bones. To insert data into a table, you would use the INSERT statement. To insert information on a book, we would enter something like this:
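A statement along these lines would do it (the column values are illustrative):

```sql
INSERT INTO books (BookID, Title, SeriesID, AuthorID)
VALUES (8, 'Lair of Bones', 3, 2);
```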

Notice that we put a list of columns in parentheses after the name of the table, then enter the keyword VALUES followed by a list of values in parentheses, in the same order as the columns were listed. We could put the columns in a different order, as long as the values are in the same order as we list the columns. Notice that the message returned indicates that the execution of the SQL statement went fine and one row was entered.

Execute the following SQL statement again and see what results are returned:
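That is:

```sql
SELECT * FROM books;
```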

You should see the data you just entered on the last row of the results. In looking at the data for the other books, suppose we notice that the title of the seventh book is spelled wrong. It should be spelled The Hobbit, not The Hobbbit. We will need to update the data for that row.

Modifying Data

To change data in a table, you will use the UPDATE statement. Let's change the spelling of the book mentioned above. To do this, enter the following:
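Assuming the misspelled title is the row with BookID 7, as discussed below:

```sql
UPDATE books
SET Title = 'The Hobbit'
WHERE BookID = 7;
```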

Notice the syntax of this SQL statement. The SET clause is where you list the columns and the values to set them to. The WHERE clause says that you want to update only rows in which the BookID column has a value of 7, of which there is only one. You can see from the message returned that one row matched the WHERE clause and one row was changed. There are no warnings because everything went fine. Execute the SELECT statement from earlier to see that the data changed.

As you can see, using MariaDB isn't very difficult. You just have to understand the syntax of SQL since it doesn't allow for typing mistakes or things in the wrong order or other deviations.

See Also

Backup Optimization

Discover techniques to optimize your backup processes, including multithreading, incremental backups, and leveraging storage snapshots.

Overview

Backup and restore implementations can help overcome specific technical challenges that would otherwise pose a barrier to meeting business requirements.

Each of these practices represents a trade-off. Understand risks before implementing any of these practices.

Forming a Backup Strategy

Learn how to design a robust backup strategy tailored to your business needs, balancing recovery time objectives and data retention policies.

Overview

The strategy applied when implementing data backups depends on business needs. Business needs can be evaluated by performing a data inventory, determining data recovery objectives, considering the replication environment, and considering encryption requirements. Also critical are a backup storage strategy and testing of backup and recovery procedures.

Partitions Metadata

Understand how to retrieve metadata about partitions using the INFORMATION_SCHEMA.PARTITIONS table to monitor row counts and storage usage.

The PARTITIONS table in the INFORMATION_SCHEMA database contains information about partitions.

The SHOW TABLE STATUS statement output contains a Create_options column, which contains the string 'partitioned' for partitioned tables.

The SHOW CREATE TABLE statement returns the CREATE TABLE statement that can be used to re-create a table, including the partitions definition.
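For example, to check row counts and storage usage per partition of a hypothetical table t1 in the test database:

```sql
SELECT TABLE_NAME, PARTITION_NAME, TABLE_ROWS, DATA_LENGTH
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_SCHEMA = 'test' AND TABLE_NAME = 't1';
```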

This page is licensed: CC BY-SA / Gnu FDL

Tables

Manage tables in MariaDB Server. This section details creating, altering, and dropping tables, along with understanding data types and storage engines for optimal database design.

LINEAR HASH Partitioning Type

Explore LINEAR HASH partitioning, a variation of HASH that uses a powers-of-two algorithm for faster partition management at the cost of distribution.

Syntax

Description



Storage Engines

Understand MariaDB Server's storage engines. Explore the features and use cases of InnoDB, Aria, MyISAM, and other engines to choose the best option for your specific data needs.

Stored Functions

Utilize stored functions in MariaDB Server. This section details creating, using, and managing user-defined functions to extend SQL capabilities and streamline data manipulation.

Scheduling of Restore Preparation

Technical challenge: restore time

Trade-off: increased ongoing overhead for backup processing

Backup data can be prepared for restore any time after it is produced and before it is used for restore. To expedite recovery, incremental backups can be pre-applied to the prior full backup. This may be done at the expense of recovery points, or at the expense of storage, by maintaining copies of unmerged full and incremental backup directories.

Moving Restored Data

Technical challenge: disk space limitations

Trade-off: modification of backup directory contents

The suggested method for moving restored data is to use --copy-back, as this method provides added safety. Where you instead need to optimize for disk space savings, system resources, and time, you may choose MariaDB Enterprise Backup's --move-back option. The speed benefit is only present when the backup files are on the same disk partition as the destination data directory.

The --move-back option will result in the removal of all data files from the backup directory, so it is best to use this option only when you have an additional copy of your backup data in another location.

To restore from a backup by moving files, use the --move-back option:
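A sketch of the command (the backup path is an assumption):

```shell
mariadb-backup --move-back --target-dir=/data/backup
```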

Multithreading

Technical challenge: CPU bottlenecks

Trade-off: Increased workload during backups

MariaDB Enterprise Backup is a multi-threaded application that by default runs on a single thread. In cases where you have a host with multiple cores available, you can specify the number of threads you want it to use for parallel data file transfers using the --parallel option:
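For example, to run a backup with four parallel threads (the path is an assumption):

```shell
mariadb-backup --backup --parallel=4 --target-dir=/data/backup
```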

Incrementing an Incremental Backup

Technical challenge: Backup resource overhead, backup duration

Trade-off: Increased restore complexity, restore process duration

Under normal operation an incremental backup is taken against an existing full backup. This allows you to further shorten the amount of time MariaDB Enterprise Backup locks MariaDB Enterprise Server while copying tablespaces. You can then apply the changes in the increment to the full backup with a --prepare operation at leisure, without disrupting database operations.

MariaDB Enterprise Backup also supports incrementing from an incremental backup. In this operation, the --incremental-basedir option points not to the full backup directory but rather to the previous incremental backup.

In preparing a backup to restore the data directory, apply the chain of incremental backups to the full backup in order. That is, first inc1/, then inc2/, and so on:
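A sketch of the sequence, assuming a full backup in full/ and increments in inc1/ and inc2/:

```shell
mariadb-backup --prepare --target-dir=/data/backup/full
mariadb-backup --prepare --target-dir=/data/backup/full \
  --incremental-dir=/data/backup/inc1
mariadb-backup --prepare --target-dir=/data/backup/full \
  --incremental-dir=/data/backup/inc2
```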

Continue until you have applied all available increments to the backup. Then restore as usual:
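For example (assuming the same directory layout):

```shell
mariadb-backup --copy-back --target-dir=/data/backup/full
```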

Start MariaDB Enterprise Server on the restored data directory.

Storage Snapshots

Technical challenge: Backup resource overhead, backup duration.

Trade-off: Limited to platforms with volume-level snapshots, may require crash recovery.

While MariaDB Enterprise Backup produces file-level backups, users on storage platforms that support snapshots may prefer to perform volume-level snapshots to minimize resource impact. This capability exists with some SAN, NAS, and volume manager platforms.

Snapshots occur point-in-time, so no preparation step is needed to ensure data is internally consistent. Snapshots occur while tablespaces are open, and a restored snapshot may need to undergo crash recovery.

Just as traditional full, incremental, and partial backups should be tested, so too should recovery from snapshots be tested on an ongoing basis.

Taking Snapshots

MariaDB Server includes advanced backup functionality to reduce the impact of backup operations:

  1. Connect with a client and issue a BACKUP STAGE START statement and then a BACKUP STAGE BLOCK_COMMIT statement.

  2. Take the snapshot.

  3. Issue a BACKUP STAGE END statement.

  4. Once the backup has been completed, remove all files which begin with the #sql prefix. These files are generated when ALTER TABLE occurs during a staged backup.

  5. Retrieve, copy, or store the snapshot as is typical for your storage platform and as per business requirements to make the backup durable. This may require mounting the snapshot in some manner.
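The BACKUP STAGE portion of the steps above corresponds to these statements:

```sql
BACKUP STAGE START;
BACKUP STAGE BLOCK_COMMIT;
-- take the storage snapshot here, outside the client
BACKUP STAGE END;
```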

It is recommended to briefly prevent writes while snapshotting. Specific commands vary depending on storage platform, business requirements, and setup, but a general approach is to:

  1. Connect with a client and issue a FLUSH TABLES WITH READ LOCK statement, leaving the client connected.

  2. Take the snapshot.

  3. Issue an UNLOCK TABLES statement, to remove the read lock.
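The locking sequence corresponds to:

```sql
FLUSH TABLES WITH READ LOCK;
-- keep this client connected while the snapshot is taken
UNLOCK TABLES;
```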

This page is: Copyright © 2025 MariaDB. All rights reserved.

Data Inventory

Backup strategy requirements flow from the understanding you build by performing a data inventory. A data inventory is established by asking questions such as:

  1. What data is housed in the databases?

  2. What business purpose does this data serve?

  3. How long does the data need to be retained in order to meet this business purpose?

  4. Are there any legal or regulatory requirements that would limit the length of data retention?

Recovery Objectives

Data recovery requirements are often defined in terms of Recovery Point Objective (RPO) and Recovery Time Objective (RTO). RTO and RPO are considered in the context of the data identified in the data inventory.

Recovery Point Objective (RPO) defines the maximum amount of data a business is willing to lose. For example, a business can define a RPO of 24 hours.

Recovery Time Objective (RTO) defines how quickly a business needs to restore service in the event of a fault. For example, a business can define a RTO of 8 hours.

Backup strategy plays a substantial role in achieving RPO and RTO.

Achieving RPO

RPO depends on completion of backups, which provide a viable recovery point. Since RPO is measured at backup completion, not backup initiation, backup jobs must be scheduled at an interval smaller than the RPO.

Techniques for achieving RPO include:

  • Frequent incremental backups and less frequent full backups.

  • Performing backups in conjunction with replication and clustering to eliminate impact on production workloads, allowing a higher backup frequency.

  • Automated monitoring of backup status.

  • Automated testing of backups.

Achieving RTO

The RTO window typically commences at the point when a decision is made by the business to recover from backups, not at the start of an incident.

Techniques for achieving RTO include:

  • Leveraging information produced during incident response, which can reduce the set of data to restore from backups, or identify specific data validation requirements dependent on the nature of the incident.

  • Having fast access to backup data. Performance requirements of backup infrastructure should be understood for both backup and restoration workloads.

  • Using delayed replication, either within the same data center or to a different data center, can provide a shorter path to recovery. This is particularly true when coupled with robust application monitoring, which allows intervention before the window of delay elapses.

  • Applying written and tested recovery procedures, which designate the systems and commands to be used during recovery.

  • Performing drills and exercises that periodically test recovery procedures to confirm readiness.

Replication Considerations

MariaDB Enterprise Server supports several implementations of replication, which accurately duplicates data from one Server to one or more other Servers. The use of a dedicated replica as a source for backups can minimize workload impact.

MariaDB Enterprise Cluster implements virtually synchronous replication, where each Server instance contains a replica of all of the data for the Cluster. Backups can be performed from any node in the Cluster.

Encryption Considerations

MariaDB Enterprise Server supports encryption on disk (data-at-rest encryption) and on the network (data-in-transit encryption).

MariaDB Enterprise Backup copies tablespaces from disk. When data-at-rest encryption is enabled, backups contain encrypted data.

MariaDB Enterprise Backup supports TLS encryption for communications with MariaDB Enterprise Server. To enable TLS encryption, set TLS options from the command-line or in the configuration file:
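For example, the standard client TLS options can be passed on the command line (the certificate paths are assumptions):

```shell
mariadb-backup --backup --target-dir=/data/backup \
  --ssl-ca=/etc/my.cnf.d/certificates/ca.pem \
  --ssl-cert=/etc/my.cnf.d/certificates/client-cert.pem \
  --ssl-key=/etc/my.cnf.d/certificates/client-key.pem
```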

Backup Storage Considerations

How backups are stored can impact backup viability. Backup storage also presents separate risks. These risks need to be carefully considered:

  • Backup data should always be stored separately from the system being backed up, and separate from the system used for recovery.

  • Backup data should be subject to equal or stronger controls than data in production databases. For example, backup data should generally be encrypted even where a decision has been made that a production database will not use data-at-rest encryption.

  • Business requirements may define a need for offsite storage of backups as a means of guaranteeing delivery on RPO. In these cases you should also consider onsite storage of backups as a means of guaranteeing delivery on RTO.

  • Retention requirements and the run-rate of new data production can aid in capacity planning.

Backup Testing

Testing has been identified as a critical success factor in the operation of data systems.

Backups should be tested. Recovery using backups and recovery procedures should be tested.

This page is: Copyright © 2025 MariaDB. All rights reserved.

LINEAR HASH partitioning is a form of partitioning, similar to HASH partitioning, in which the server takes care of the partition in which to place the data, ensuring a relatively even distribution among the partitions.

LINEAR HASH partitioning makes use of a powers-of-two algorithm, while HASH partitioning uses the modulus of the hashing function's value. Adding, dropping, merging and splitting partitions is much faster than with the HASH partitioning type, however, data is less likely to be evenly distributed over the partitions.

PARTITION BY LINEAR HASH (partitioning_expression)
[PARTITIONS (number_of_partitions)]

Example

CREATE OR REPLACE TABLE t1 (c1 INT, c2 DATETIME)
  PARTITION BY LINEAR HASH(TO_DAYS(c2))
  PARTITIONS 5;

This page is licensed: CC BY-SA / Gnu FDL

Fixing Connection Issues

Identify and resolve common connection problems, including server status checks, authentication errors, and network configuration.

If you are completely new to MariaDB and relational databases, you may want to start with the MariaDB Primer. Also, make sure you understand the connection parameters discussed in the Connecting to MariaDB article.

There are a number of common problems that can occur when connecting to MariaDB.

Server Not Running in Specified Location

If the error you get is something like:

ERROR 2002 (HY000): Can't connect to local server through socket '/var/run/mysqld/mysqld.sock' (2)

or

ERROR 2003 (HY000): Can't connect to server on 'ip_address' (111)

the server is either not running, or not running on the specified port, socket or pipe. Make sure you are using the correct host, port, pipe, socket and protocol options, or check that the server has been installed, configured, and started correctly.

The socket file can be in a non-standard path. In this case, the socket option is probably written in the my.cnf file. Check that its value is identical in the [mysqld] and [client] sections; if not, the client will look for a socket in a wrong place.

If unsure where the Unix socket file is running, it's possible to find this out, for example:
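One way, on Linux, is to list listening Unix sockets and look for the server process:

```shell
netstat -ln | grep mysqld
# or, on systems that ship ss instead of netstat:
ss -ln | grep mysqld
```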

Unable to Connect from a Remote Location

By default, the MariaDB server does not accept connections from a remote client, or connections made over TCP/IP with a hostname, and has to be configured to permit these.

To solve this, see Configuring MariaDB for Remote Client Access.

Authentication Problems

The unix_socket authentication plugin is enabled by default on Unix-like systems. This uses operating system credentials when connecting to MariaDB via the local Unix socket file. See the unix_socket authentication plugin documentation for instructions on connecting and on switching to password-based authentication, as well as for an overview.

Authentication is granted to a particular username/host combination. 'user1'@'localhost', for example, is not the same as 'user1'@'166.78.144.191'. See the GRANT article for details on granting permissions.

Passwords are hashed with the PASSWORD() function. If you set a password with the SET PASSWORD statement, the PASSWORD() function must be used at the same time. For example, use SET PASSWORD FOR 'bob'@'%.loc.gov' = PASSWORD('newpass') rather than just SET PASSWORD FOR 'bob'@'%.loc.gov' = 'newpass'.

Problems Exporting Query Results

If you can run regular queries but get an authentication error when running the SELECT ... INTO OUTFILE, SELECT ... INTO DUMPFILE or LOAD DATA INFILE statements, you do not have permission to write files to the server. This requires the FILE privilege. See the GRANT article.

Access to the Server, but not to a Database

If you can connect to the server, but not to a database, for example:

ERROR 1044 (42000): Access denied for user 'name'@'localhost' to database 'db1'

or can connect to a particular database but not another, for example if mariadb -u name -p db1 works but mariadb -u name -p db2 does not, you have not been granted permission for the particular database. See the GRANT article.

Option Files and Environment Variables

It's possible that option files or environment variables may be providing incorrect connection parameters. Check the values provided in any option files read by the client you are using, as well as any connection-related environment variables, and see the documentation for the particular client.

Option files can usually be suppressed with the --no-defaults option, for example:
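```shell
mariadb --no-defaults -u user_name -p
```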

Unable to Connect to a Running Server / Lost root Password

If you are unable to connect to a server, for example because you have lost the root password, you can start the server without the privilege tables by using the --skip-grant-tables option, which gives users full access to all tables. You can then run FLUSH PRIVILEGES to resume using the grant tables, followed by SET PASSWORD to change the password for an account.

localhost and %

You may have created a user with something like:

This creates a user with the '%' wildcard host.

However, you may still be failing to log in from localhost. Some setups create anonymous users, including for localhost, so records like the following exist in the user table:

Host: 'localhost', User: '' (anonymous)
Host: '%', User: 'melisa'

Since you are connecting from localhost, the anonymous credentials, rather than those for the 'melisa' user, are used. The solution is either to add a new user specific to localhost, or to remove the anonymous localhost user.

See Also

This page is licensed: CC BY-SA / Gnu FDL

Restoring Individual Tables and Partitions with mariadb-backup

Restore specific tables from a backup. Learn the process of importing individual .ibd files to recover specific tables without restoring the whole database.

When using mariadb-backup, you don't necessarily need to restore every table and/or partition that was backed up. Even if you're starting from a full backup, it is certainly possible to restore only certain tables and/or partitions from the backup, as long as the table or partition involved is in an InnoDB file-per-table tablespace. This page documents how to restore individual tables and partitions.

Preparing the Backup

Before you can restore from a backup, you first need to prepare it to make the data files consistent. You can do so with the --prepare option.

The ability to restore individual tables and partitions relies on InnoDB's transportable tablespaces. For MariaDB to import tablespaces like these, InnoDB looks for a file with a .cfg extension. For mariadb-backup to create these files, you also need to add the --export option during the prepare step.

For example, you might execute the following command:
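A sketch (the backup path is an assumption):

```shell
mariadb-backup --prepare --export --target-dir=/data/backup
```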

If this operation completes without error, then the backup is ready to be restored.

Note

Early versions of mariadb-backup did not support the --export option. In those versions, mariadb-backup could not create .cfg files for InnoDB file-per-table tablespaces during the --prepare stage. You can still import file-per-table tablespaces without the .cfg files in many cases, so it may still be possible in those versions to restore partial backups, or to restore individual tables and partitions, with just the .ibd files. If you have a full backup and need to create .cfg files for InnoDB file-per-table tablespaces, you can do so by preparing the backup as usual without the --export option, restoring the backup, and starting the server. At that point, you can use the server's built-in features to copy the transportable tablespaces.

Restoring the Backup

The restore process for restoring individual tables and/or partitions is quite different from the process for full backups.

Rather than using the --copy-back or --move-back options, each individual InnoDB file-per-table tablespace file has to be manually imported into the target server. The process used to restore the backup depends on whether partitioning is involved.

Restoring Individual Non-Partitioned Tables

To restore individual non-partitioned tables from a backup, find the .ibd and .cfg files for the table in the backup, and then import them using the Importing Transportable Tablespaces for Non-partitioned Tables process.

Restoring Individual Partitions and Partitioned Tables

To restore individual partitions or partitioned tables from a backup, find the .ibd and .cfg files for the partitions in the backup, and then import them using the Importing Transportable Tablespaces for Partitioned Tables process.

This page is licensed: CC BY-SA / Gnu FDL

Files Backed Up By mariadb-backup

List of file types included in a backup. Understand which data files, logs, and configuration files are preserved during the backup process.

Files Included in Backup

mariadb-backup backs up the files listed below.

InnoDB Data Files

mariadb-backup backs up the following InnoDB data files:

MyRocks Data Files

mariadb-backup will back up tables that use the MyRocks storage engine. This data is located in the directory defined by the rocksdb_datadir system variable. mariadb-backup backs this data up by performing a checkpoint using the rocksdb_create_checkpoint system variable.

Other Data Files

mariadb-backup also backs up files with the following extensions:

  • frm

  • isl

  • MYD

  • MYI

Files Excluded From Backup

mariadb-backup does not back up every file in the data directory; files that are not needed to restore the server are excluded.

This page is licensed: CC BY-SA / Gnu FDL

Partitioning Types Overview

An introduction to the various partitioning strategies available in MariaDB, helping you choose the right method for your data distribution needs.

A partitioning type determines how a partitioned table's rows are distributed across partitions. Some partitioning types require the user to specify a partitioning expression that determines in which partition a row is stored.

The size of individual partitions depends on the partitioning type. Read and write performance are affected by the partitioning expression. Therefore, these choices should be made carefully.

Partitioning Types

MariaDB supports the following partitioning types:

  • RANGE
  • LIST
  • HASH
  • KEY
  • LINEAR HASH
  • LINEAR KEY

See Also

This page is licensed: CC BY-SA / Gnu FDL

LINEAR KEY Partitioning Type

Learn about LINEAR KEY partitioning, which combines the internal key hashing with a linear algorithm for efficient partition handling.

Syntax

PARTITION BY LINEAR KEY ([column_names])
[PARTITIONS (number_of_partitions)]

Description

LINEAR KEY partitioning is a form of partitioning, similar to KEY partitioning.

LINEAR KEY partitioning makes use of a powers-of-two algorithm, while KEY partitioning uses modulo arithmetic to determine the partition number.

Adding, dropping, merging and splitting partitions is much faster than with the KEY partitioning type; however, data is less likely to be evenly distributed over the partitions.

Example
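A sketch analogous to the LINEAR HASH example elsewhere in this section:

```sql
CREATE OR REPLACE TABLE t1 (c1 INT, c2 DATETIME)
  PARTITION BY LINEAR KEY(c1)
  PARTITIONS 5;
```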

This page is licensed: CC BY-SA / Gnu FDL

Using CONNECT - Exporting Data From MariaDB

The CONNECT storage engine has been deprecated.


Exporting data from MariaDB is possible with CONNECT, in particular for all formats not supported by the SELECT INTO OUTFILE statement. Let us consider the query:

SELECT
    plugin_name AS handler,
    plugin_version AS version,
    plugin_author AS author,
    plugin_description AS description,
    plugin_maturity AS maturity
FROM
    information_schema.plugins
WHERE
    plugin_type = 'STORAGE ENGINE';

Supposing you want to get the result of this query into a file handout.htm in XML/HTML format, allowing it to be displayed in a web browser, this is how you can do it:

Just create the CONNECT table that are used to make the file:

CREATE TABLE handout
ENGINE=CONNECT
table_type=XML
file_name='handout.htm'
header=yes
option_list='name=TABLE,coltype=HTML,attribute=border=1;cellpadding=5,headattr=bgcolor=yellow'
AS
SELECT
    plugin_name AS handler,
    plugin_version AS version,
    plugin_author AS author,
    plugin_description AS description,
    plugin_maturity AS maturity
FROM
    information_schema.plugins
WHERE
    plugin_type = 'STORAGE ENGINE';

Here the column definition is not given and will come from the SELECT statement following the CREATE. The CONNECT options are the same as we have seen previously. This will do both actions: creating the matching handout CONNECT table and 'filling' it with the query result.

Note 1: This could not be done in only one statement if the table type had required using explicit CONNECT column options. In that case, first create the table, then populate it with an INSERT statement.

Note 2: The source “plugins” table column “description” is a long text column, a data type not supported by CONNECT tables. It is silently replaced internally by VARCHAR(256).

This page is licensed: GPLv2

The Aria Name

A brief history of the naming of the Aria storage engine, explaining its origins as "Maria" and the reasons for the eventual name change.

The Aria storage engine used to be called Maria. This page gives the history and background of how and why this name was changed to Aria.

Backstory

When starting what became the MariaDB project, Monty and the initial developers only planned to work on a next generation MyISAM storage engine replacement. This storage engine would be crash safe and eventually support transactions. Monty named the storage engine, and the project, after his daughter, Maria.

Work began in earnest on the Maria storage engine but the plans quickly expanded and morphed and soon the developers were not just working on a storage engine, but on a complete branch of the MySQL database. Since the project was already called Maria, it made sense to call the whole database server MariaDB.

Renaming Maria (the engine)

So now there was the database, MariaDB, and the storage engine, Maria. To end the confusion this caused, the decision was made to rename the storage engine.

Monty's first suggestion was to name it Lucy, after his dog, but few who heard it liked that idea. So the decision was made that the next best thing was for the community to suggest and vote on names.

This was done by running a contest from 2009 through the end of May 2010. After that the best names were voted on by the community and Monty picked and announced the winner (Aria) at OSCon 2010 in Portland.

The winning entry was submitted by Chris Tooley. He received a Linux-powered System 76 Meerkat NetTop from Monty Program as his prize for suggesting the winning name.

See Also

This page is licensed: CC BY-SA / Gnu FDL

Aria Two-step Deadlock Detection

Explains Aria's deadlock detection mechanism, which uses a two-step process with configurable search depths and timeouts to resolve conflicts.

Description

The Aria storage engine can automatically detect and deal with deadlocks (see the Wikipedia deadlocks article).

This feature is controlled by four configuration variables, two that control the search depth and two that control the timeout:

  • deadlock_search_depth_short

  • deadlock_search_depth_long

  • deadlock_timeout_short

  • deadlock_timeout_long

How it Works

If Aria is ever unable to obtain a lock, we might have a deadlock. There are two primary ways to detect whether a deadlock has actually occurred. The first is to search a wait-for graph (see the wait-for graph article on Wikipedia) and the second is to just wait and let the deadlock exhibit itself. Aria two-step deadlock detection does a combination of both.

First, if the lock request cannot be granted immediately, we do a short search of the wait-for graph with a small search depth as configured by the deadlock_search_depth_short variable. We have a depth limit because the graph can (theoretically) be arbitrarily big and we don't want to recursively search the graph arbitrarily deep. This initial, short search is very fast and most deadlocks are detected right away. If no deadlock cycles are found with the short search the system waits for the amount of time configured in deadlock_timeout_short to see if the lock conflicts are removed and the lock can be granted. Assuming this did not happen and the lock request still waits, the system then moves on to step two, which is a repeat of the process but this time searching deeper using the deadlock_search_depth_long. If no deadlock has been detected, it waits deadlock_timeout_long and times out.

When a deadlock is detected the system uses a weighting algorithm to determine which thread in the deadlock should be killed and then kills it.
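The two-step process can be sketched in Python. This is an illustrative model only: the function names are invented, the depth defaults mirror the documented variables, and the timeouts the real engine applies between the two searches are omitted.

```python
# Wait-for graph: mapping thread -> list of threads it is waiting for.

def finds_cycle(waits_for, start, max_depth):
    """Depth-limited DFS: is there a cycle back to `start`
    within max_depth edges?"""
    def dfs(node, depth):
        if depth > max_depth:
            return False
        for nxt in waits_for.get(node, ()):
            if nxt == start or dfs(nxt, depth + 1):
                return True
        return False
    return dfs(start, 1)

def two_step_detect(waits_for, thread, short_depth=4, long_depth=15):
    """Short cheap search first; if inconclusive, search deeper."""
    if finds_cycle(waits_for, thread, short_depth):
        return "deadlock (short search)"
    # ...the real engine now waits deadlock_timeout_short before step two...
    if finds_cycle(waits_for, thread, long_depth):
        return "deadlock (long search)"
    # ...the real engine waits deadlock_timeout_long, then times out.
    return None
```

A three-thread cycle A→B→C→A, for example, is missed by a depth-2 search but caught by a deeper one, which is exactly why the second, more expensive pass exists.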

This page is licensed: CC BY-SA / Gnu FDL

USING CONNECT - Offline Documentation

The CONNECT storage engine has been deprecated.


Note: You can download a PDF version of the CONNECT documentation (1.7.0003).

This page is licensed: CC BY-SA / Gnu FDL

Basic SQL Statements

A quick reference guide for essential SQL statements in MariaDB, categorized by Data Definition, Data Manipulation, and Transaction Control.

This page lists the most important SQL statements and contains links to their documentation pages. If you need a basic tutorial on how to use the MariaDB database server and how to execute simple commands, see A MariaDB Primer.

Also see Common MariaDB Queries for examples of commonly-used queries.

Defining How Your Data Is Stored

Using Encryption and Compression Tools With mariadb-backup

Secure and compress backup streams. Learn to pipe backup output to tools like GPG and GZIP for encryption and storage efficiency.

mariadb-backup supports streaming to stdout with the --stream=xbstream option. This option allows easy integration with popular encryption and compression tools. Below are several examples.

Encrypting and Decrypting Backup With openssl

The following example creates an AES-encrypted backup, protected with the password "mypass", and stores it in a file "backup.xb.enc":

mariadb-backup --user=root --backup --stream=xbstream | openssl enc -aes-256-cbc -k mypass > backup.xb.enc

To decrypt and unpack this backup into the current directory, the following command can be used:

openssl enc -d -aes-256-cbc -k mypass -in backup.xb.enc | mbstream -x

HASH Partitioning Type

Learn about HASH partitioning, which distributes data based on a user-defined expression to ensure an even spread of rows across partitions.

Syntax

PARTITION BY HASH (partitioning_expression)
[PARTITIONS (number_of_partitions)]

Description


DROP PROCEDURE

The DROP PROCEDURE statement permanently removes a stored procedure and its associated privileges from the database.

Syntax

DROP PROCEDURE [IF EXISTS] sp_name

Description

This statement is used to drop a stored procedure. That is, the specified routine is removed from the server along with all privileges specific to the procedure. You must have the ALTER ROUTINE privilege for the routine. If the automatic_sp_privileges server system variable is set, that privilege and EXECUTE are granted automatically to the routine creator - see Stored Routine Privileges.

Replication as a Backup Solution

Explore how to use replication as part of your backup strategy, allowing you to offload backup tasks to a replica server to reduce load on the primary.

Replication can be used to support the backup strategy.

Replication alone is not sufficient for backup. It assists in protecting against hardware failure on the primary server, but does not protect against data loss: an accidental or malicious DROP DATABASE or TRUNCATE TABLE statement is replicated to the replica as well. Care needs to be taken to prevent data getting out of sync between the primary and the replica.

Partitions Files

Learn how MariaDB stores partitioned tables on the filesystem, typically creating separate .ibd files for each partition when using InnoDB.

A partitioned table is stored in multiple files. By default, these files are stored in the MariaDB (or InnoDB) data directory. It is possible to keep them in different paths by specifying the DATA_DIRECTORY and INDEX_DIRECTORY table options. This is useful to store different partitions on different devices.

Note that, if the innodb_file_per_table server system variable is set to 0 at the time of the table creation, all partitions are stored in the system tablespace.

The following files exist for each partitioned table:

File name
Notes

Stored Routine Limitations

Stored routines have specific restrictions, such as prohibiting certain SQL statements (e.g., LOAD DATA) and disallowing result sets in functions.

The following SQL statements are not permitted inside any stored routine (stored function, stored procedure, event, or trigger).

  • ALTER VIEW; you can use CREATE OR REPLACE VIEW instead.

  • LOAD DATA and LOAD TABLE.

LIST Partitioning Type

Understand LIST partitioning, where rows are assigned to partitions based on whether a column value matches one in a defined list of values.

LIST partitioning is conceptually similar to RANGE partitioning. In both cases you decide on a partitioning expression (a column, or a slightly more complex calculation) and use it to determine which partitions will contain each row. However, with the RANGE type, partitioning is done by assigning a range of values to each partition. With the LIST type, we assign a set of values to each partition. This is usually preferred if the partitioning expression can return a limited set of values.

A variant of this partitioning method, LIST COLUMNS, allows us to use multiple columns and more datatypes.

Aria Group Commit

Learn about Aria's group commit functionality, which improves performance by batching commit operations to the transaction log.

The Aria storage engine includes a feature to group commits to speed up concurrent threads doing many inserts into the same or different Aria tables.

By default, group commit for Aria is turned off. It is controlled by the aria_group_commit and aria_group_commit_interval system variables.

Information on setting server variables can be found on the Server System Variables page.

Terminology

Using CONNECT - Condition Pushdown

The CONNECT storage engine has been deprecated.


The ODBC, JDBC, MYSQL, TBL, and WMI table types use engine condition pushdown in order to restrict the number of rows returned by the RDBMS source or the WMI component.

The CONDITION_PUSHDOWN argument used in old versions of CONNECT is no longer needed, because CONNECT now uses condition pushdown unconditionally.

This page is licensed: GPLv2

Using CONNECT - Importing File Data Into MariaDB Tables

The CONNECT storage engine has been deprecated.


Directly using external (file) data has many advantages, such as working on “fresh” data produced, for instance, by cash registers, telephone switches, or scientific apparatus. However, you may in some cases want to import external data into your MariaDB database. This is extremely simple and flexible using the CONNECT handler. For instance, let us suppose you want to import the data of the xsample.xml XML file previously given in example into a table called biblio belonging to the connect database. All you have to do is to create it by:

CREATE TABLE biblio ENGINE=myisam SELECT * FROM xsampall2;

This last statement creates the table and inserts the original XML data, translated to tabular format by the xsampall2 CONNECT table, into the MariaDB biblio table. Note that further transformations on the data could have been achieved by using a more elaborate SELECT statement in the CREATE statement, for instance using filters, aliases or applying functions to the data. However, because the CREATE TABLE process copies table data, later modifications of the xsample.xml file will not change the biblio table, and changes to the biblio table will not modify the xsample.xml file.

mariadb-backup --move-back --target-dir=/data/backups/full
mariadb-backup --backup \
      --target-dir=/data/backups/full \
      --user=mariadb-backup \
      --password=mbu_passwd \
      --parallel=12
mariadb-backup --backup \
      --incremental-basedir=/data/backups/inc1 \
      --target-dir=/data/backups/inc2 \
      --user=mariadb-backup \
      --password=mbu_passwd
mariadb-backup --prepare \
      --target-dir=/data/backups/full \
      --incremental-dir=/data/backups/inc1
mariadb-backup --prepare \
      --target-dir=/data/backups/full \
      --incremental-dir=/data/backups/inc2
mariadb-backup --copy-back --target-dir=/data/backups/full
chown -R mysql:mysql /var/lib/mysql
mariadb-backup --backup \
      --target-dir=/data/backups/full \
      --user=mariadb-backup \
      --password=mbu_passwd \
      --ssl-ca=/etc/my.cnf.d/certs/ca.pem \
      --ssl-cert=/etc/my.cnf.d/certs/client-cert.pem \
      --ssl-key=/etc/my.cnf.d/certs/client-key.pem
CREATE DATABASE IF NOT EXISTS test;

USE test;

CREATE TABLE IF NOT EXISTS books (
  BookID INT NOT NULL PRIMARY KEY AUTO_INCREMENT, 
  Title VARCHAR(100) NOT NULL, 
  SeriesID INT, AuthorID INT);

CREATE TABLE IF NOT EXISTS authors 
(id INT NOT NULL PRIMARY KEY AUTO_INCREMENT);

CREATE TABLE IF NOT EXISTS series 
(id INT NOT NULL PRIMARY KEY AUTO_INCREMENT);

INSERT INTO books (Title,SeriesID,AuthorID) 
VALUES('The Fellowship of the Ring',1,1), 
      ('The Two Towers',1,1), ('The Return of the King',1,1),  
      ('The Sum of All Men',2,2), ('Brotherhood of the Wolf',2,2), 
      ('Wizardborn',2,2), ('The Hobbbit',0,1);
SHOW TABLES;

+----------------+
| Tables_in_test |
+----------------+
| authors        |
| books          |
| series         |
+----------------+
3 rows in set (0.00 sec)
DESCRIBE books;

+----------+--------------+------+-----+---------+----------------+
| Field    | Type         | Null | Key | Default | Extra          |
+----------+--------------+------+-----+---------+----------------+
| BookID   | int(11)      | NO   | PRI | NULL    | auto_increment |
| Title    | varchar(100) | NO   |     | NULL    |                |
| SeriesID | int(11)      | YES  |     | NULL    |                |
| AuthorID | int(11)      | YES  |     | NULL    |                |
+----------+--------------+------+-----+---------+----------------+
SELECT * FROM books;

+--------+----------------------------+----------+----------+
| BookID | Title                      | SeriesID | AuthorID |
+--------+----------------------------+----------+----------+
|      1 | The Fellowship of the Ring |        1 |        1 |
|      2 | The Two Towers             |        1 |        1 |
|      3 | The Return of the King     |        1 |        1 |
|      4 | The Sum of All Men         |        2 |        2 |
|      5 | Brotherhood of the Wolf    |        2 |        2 |
|      6 | Wizardborn                 |        2 |        2 |
|      7 | The Hobbbit                |        0 |        1 |
+--------+----------------------------+----------+----------+
7 rows in set (0.00 sec)
INSERT INTO books (Title, SeriesID, AuthorID)
VALUES ("Lair of Bones", 2, 2);

Query OK, 1 row affected (0.00 sec)
SELECT * FROM books;
UPDATE books 
SET Title = "The Hobbit" 
WHERE BookID = 7;

Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
mariadb -uname -p
ERROR 2002 (HY000): Can't connect to local MySQL server through 
  socket '/var/run/mysqld/mysqld.sock' (2 "No such file or directory")
mariadb -uname -p --port=3307 --protocol=tcp
ERROR 2003 (HY000): Can't connect to MySQL server on  'localhost' 
  (111 "Connection refused")
Error 1698: Access denied for user
Getting, Installing and Upgrading MariaDB
Starting and Stopping MariaDB
Troubleshooting Installation Issues
Configuring MariaDB for Remote Client Access
unix_socket authentication plugin
unix_socket authentication plugin
Authentication
GRANT
PASSWORD
SET PASSWORD
SELECT ... INTO OUTFILE
SELECT ... INTO DUMPFILE
LOAD DATA INFILE
GRANT
GRANT
mariadbd Configuration Files and Groups
Clients and Utilities
--skip-grant-tables
FLUSH PRIVILEGES
SET PASSWORD
CREATE USER
GRANT
Authentication
Authentication from MariaDB 10.4 video tutorial
netstat -ln | grep mysqld
unix  2      [ ACC ]     STREAM     LISTENING     33209505 /var/run/mysqld/mysqld.sock
(/my/maria-10.4) ./client/mysql --host=myhost --protocol=tcp --port=3306 test
ERROR 2002 (HY000): Can't connect to MySQL server on 'myhost' (115)
(/my/maria-10.4) telnet myhost 3306
Trying 192.168.0.11...
telnet: connect to address 192.168.0.11: Connection refused
(/my/maria-10.4) perror 115
OS error code 115:  Operation now in progress
USE test;
ERROR 1044 (42000): Access denied for user 'ian'@'localhost' to database 'test'
mariadb-import --no-defaults ...
CREATE USER melisa IDENTIFIED BY 'password';
SELECT user,host FROM mysql.user WHERE user='melisa';
+--------+------+
| user   | host |
+--------+------+
| melisa | %    |
+--------+------+
SELECT user,host FROM mysql.user WHERE user='melisa' OR user='';
+--------+-----------+
| user   | host      |
+--------+-----------+
| melisa | %         |
|        | localhost |
+--------+-----------+

Retrieve, copy, or store the snapshot as is typical for your storage platform and as per business requirements to make the backup durable. This may require mounting the snapshot in some manner.

  • MAD

  • MAI

  • MRG

  • TRG

  • TRN

  • ARM

  • ARZ

  • CSM

  • CSV

  • opt

  • par

  • InnoDB system tablespace
    InnoDB file-per-table tablespaces
    MyRocks
    rocksdb_datadir
    rocksdb_create_checkpoint
    InnoDB Temporary Tablespaces
    Binary logs
    Relay logs
  • CREATE DATABASE is used to create a new, empty database.
  • DROP DATABASE is used to completely destroy an existing database.

  • USE is used to select a default database.

  • CREATE TABLE is used to create a new table, which is where your data is actually stored.

  • ALTER TABLE is used to modify an existing table's definition.

  • DROP TABLE is used to completely destroy an existing table.

  • DESCRIBE shows the structure of a table.

  • Manipulating Your Data

    • SELECT is used when you want to read (or select) your data.

    • INSERT is used when you want to add (or insert) new data.

    • UPDATE is used when you want to change (or update) existing data.

    • DELETE is used when you want to remove (or delete) existing data.

    • REPLACE is used when you want to add or change (or replace) new or existing data.

    • TRUNCATE TABLE is used when you want to empty (or delete) all data from a table.

    Transactions

    • START TRANSACTION is used to begin a transaction.

    • COMMIT is used to apply changes and end a transaction.

    • ROLLBACK is used to discard changes and end a transaction.

    A Simple Example

    The first version of this article was copied, with permission, from Basic_SQL_Statements on 2012-10-05.

    This page is licensed: CC BY-SA / Gnu FDL

    Compressing and Decompressing Backup With gzip

    This example compresses the backup without encrypting:

    We can decompress and unpack the backup as follows:

    Compressing and Encrypting Backup, Using gzip and openssl

    This example adds a compression step before the encryption, otherwise looks almost identical to the previous example:

    We can decrypt, decompress and unpack the backup as follow (note gzip -d in the pipeline):

    Compressing and Encrypting with 7Zip

    7zip archiver is a popular utility (especially on Windows) that supports reading from standard input, with the -si option, and writing to standard output with the -so option, and can thus be used together with mariadb-backup.

    Compressing backup with the 7z command line utility works as follows:

    Uncompress and unpack the archive with

    7z also has builtin AES-256 encryption. To encrypt the backup from the previous example using password SECRET, add -pSECRET to the 7z command line.

    Compressing with zstd

    Compress

    Decompress , unpack

    Encrypting With GPG

    Encryption

    Decrypt, unpack

    Interactive Input for Passphrases

    Most of the described tools also provide a way to enter a passphrase interactively (although 7zip does not seem to work well when reading input from stdin). Please consult documentation of the tools for more info.

    Writing extra status files

    By default, files like xtrabackup_checkpoints are written to the output stream only, and so would not be available for taking further incremental backups without prior extraction from the compressed or encrypted stream output file.

    To avoid this, these files can additionally be written to a directory that can then be used as input for further incremental backups, using the --extra-lsndir=... option.

    See also e.g: Combining incremental backups with streaming output

    This page is licensed: CC BY-SA / Gnu FDL

    HASH partitioning is a form of partitioning in which the server takes care of the partition in which to place the data, ensuring an even distribution among the partitions.

    It requires a column value, or an expression based on a column value, which is hashed, as well as the number of partitions into which to divide the table.

    • partitioning_expression needs to return a non-constant, deterministic integer. It is evaluated for each insert and update, so overly complex expressions can lead to performance issues. A hashing function operating on a single column, and where the value changes consistently with the column value, allows for easy pruning on ranges of partitions, and is usually a better choice. For this reason, using multiple columns in a hashing expression is not usually recommended.

    • number_of_partitions is a positive integer specifying the number of partitions into which to divide the table. If the PARTITIONS clause is omitted, the default number of partitions is one.

    Determining the Partition

    To determine which partition to use, perform the following calculation:

    MOD(partitioning_expression, number_of_partitions)

    For example, if the expression is TO_DAYS(datetime_column) and the number of partitions is 5, inserting a datetime value of '2023-11-15' would determine the partition as follows:

    • TO_DAYS('2023-11-15') gives a value of 739204.

    • MOD(739204,5) returns 4, so partition 4 is used.
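The arithmetic above can be checked with a short Python sketch. TO_DAYS() is emulated here from Python's proleptic-calendar ordinal (the two counts differ by a constant 365 for modern dates); this is an illustration, not the server's implementation:

```python
from datetime import date

def to_days(d):
    """Emulate MariaDB's TO_DAYS() for modern dates: Python's ordinal
    (days since 0001-01-01 = 1) plus 365."""
    return d.toordinal() + 365

def hash_partition(expr_value, num_partitions):
    """HASH partitioning: MOD(partitioning_expression, number_of_partitions)."""
    return expr_value % num_partitions

days = to_days(date(2023, 11, 15))   # 739204
partition = hash_partition(days, 5)  # 4
```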

    HASH partitioning makes use of the modulus of the hashing function's value. The LINEAR HASH partitioning type is similar, using a powers-of-two algorithm. Data is more likely to be evenly distributed over the partitions than with the LINEAR HASH partitioning type; however, adding, dropping, merging and splitting partitions is much slower.

    Examples

    Use the Information Schema PARTITIONS table for more information:

    See Also

    • Partition Maintenance for suggestions on using partitions

    This page is licensed: CC BY-SA / Gnu FDL


    The IF EXISTS clause is a MySQL/MariaDB extension. It prevents an error from occurring if the procedure or function does not exist. A note is produced that can be viewed with SHOW WARNINGS.

    While this statement takes effect immediately, threads which are executing a procedure can continue execution.

    Examples

    IF EXISTS:

    See Also

    • DROP FUNCTION

    • Stored Procedure Overview

    • CREATE PROCEDURE

    • ALTER PROCEDURE

    This page is licensed: GPLv2, originally from fill_help_tables.sql


    The terms master and slave have historically been used in replication, and MariaDB has begun the process of adding primary and replica synonyms. The old terms will continue to be used to maintain backward compatibility - see MDEV-18777 to follow progress on this effort.

    Replication is most commonly used to support backups as follows:

    • A primary server replicates to a replica

    • Backups are then run off the replica without any impact on the primary.

    Backups can have a significant effect on a server, and a high-availability primary may not be able to be stopped, locked, or simply handle the extra load of a backup. Running the backup from a replica has the advantage of being able to shut down or lock the replica and perform a backup without any impact on the primary server.

    Note that when backing up off a replica server, it is important to ensure that the servers keep the data in sync. See for example Replication and Foreign Keys for a situation when identical statements can result in different data on a replica and a primary.

    See Also

    • Replication

    • Backup & Restore

    This page is licensed: CC BY-SA / Gnu FDL


    table_name#P#partition_name.ext

    Normal files created by the storage engine use this pattern for names. The extension depends on the storage engine.

    For example, an InnoDB table with 4 partitions will have the following files:

    If we convert the table to MyISAM, we will have these files:

    This page is licensed: CC BY-SA / Gnu FDL

    table_name.frm

    Contains the table definition. Non-partitioned tables have this file, too.

    table_name.par


    Contains the partitions definitions.

    orders.frm
    orders.par
    orders#P#p0.ibd
    orders#P#p1.ibd
    orders#P#p2.ibd
    orders#P#p3.ibd
    orders.frm
    orders.par
    orders#P#p0.MYD
    orders#P#p0.MYI
    orders#P#p1.MYD
    orders#P#p1.MYI
    orders#P#p2.MYD
    orders#P#p2.MYI
    orders#P#p3.MYD
    orders#P#p3.MYI
  • INSERT DELAYED is permitted, but the statement is handled as a regular INSERT.

  • LOCK TABLES and UNLOCK TABLES.

  • References to local variables within prepared statements inside a stored routine (use user-defined variables instead).

  • BEGIN (WORK) is treated as the beginning of a BEGIN END block, not a transaction, so START TRANSACTION needs to be used instead.

  • The number of permitted recursive calls is limited to max_sp_recursion_depth. If this variable is 0 (the default), recursion is disabled. The limit does not apply to stored functions.

  • Most statements that are not permitted in prepared statements are not permitted in stored programs. See Prepare Statement:Permitted statements for a list of statements that can be used. SIGNAL, RESIGNAL and GET DIAGNOSTICS are exceptions, and may be used in stored routines.

  • There are also further limitations specific to the kind of stored routine.

    Note that, if a stored program calls another stored program, the latter will inherit the caller's limitations. So, for example, if a stored procedure is called by a stored function, that stored procedure will not be able to produce a result set, because stored functions can't do this.

    See Also

    • Stored Function Limitations

    • Trigger Limitations

    • Event Limitations

    This page is licensed: CC BY-SA / Gnu FDL

    Syntax

    The last part of a CREATE TABLE statement can be the definition of the new table's partitions. In the case of LIST partitioning, the syntax is as follows:

    PARTITION BY LIST indicates that the partitioning type is LIST.

    The partitioning_expression is an SQL expression that returns a value from each row. In the simplest cases, it is a column name. This value is used to determine which partition should contain a row.

    partition_name is the name of a partition.

    value_list is a list of values. If partitioning_expression returns one of these values, the row is stored in this partition. If we try to insert something that does not belong to any of these value lists, the row is rejected with an error.

    The DEFAULT partition catches all records which do not fit into other partitions.
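As a toy illustration, partition selection for LIST partitioning (including the DEFAULT catch-all) can be modelled like this; the partition names and data structures here are invented for the example:

```python
def list_partition(value, partitions, default=None):
    """Return the partition whose value list contains `value`;
    fall back to the DEFAULT partition if one is defined."""
    for name, values in partitions:
        if value in values:
            return name
    if default is not None:
        return default
    # Without a DEFAULT partition, the server rejects the row with an error.
    raise ValueError(f"no partition accepts {value!r}")

# Hypothetical partitions keyed on a region code column.
regions = [
    ("p_north", {1, 2, 3}),
    ("p_south", {4, 5, 6}),
]
```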

    Use Cases

    LIST partitioning can be useful when we have a column that can only contain a limited set of values. Even in that case, RANGE partitioning could be used instead; but LIST partitioning allows us to equally distribute the rows by assigning a proper set of values to each partition.

    Example

    This page is licensed: CC BY-SA / Gnu FDL


    A commit is a flush of logs followed by a sync.

  • sent to disk means written to disk but not sync()ed,

  • flushed means sent to disk and sync()ed.

  • LSN means Log Sequence Number. It refers to the position in the transaction log.

  • Non Group commit logic (aria_group_commit="none")

    The thread which first started the commit performs the actual flush of logs. Other threads set the new goal (LSN) of the next pass (if it is the maximum) and wait for the pass to end, or just wait for the pass to end.

    The effect of this is that a flush (write of logs + sync) will save all data for all threads/transactions that have been waiting since the last flush.

    If hard group commit is enabled (aria_group_commit="hard")

    If hard commit and aria_group_commit_interval=0

    The first thread sends all changed buffers to disk. This is repeated as long as there are new LSNs added. The process can not loop forever because we have a limited number of threads and they will wait for the data to be synced.

    Pseudo code:

    If hard commit and aria_group_commit_interval > 0

    If less than rate microseconds have passed since the last sync, then after the buffers have been sent to disk, wait until rate microseconds have passed since the last sync, do the sync, and return. This ensures that if sync is called infrequently, we don't do any waits.
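The interval logic reduces to a small calculation. A sketch, assuming times are measured in microseconds (the helper name is invented for illustration):

```python
def sync_wait_us(now_us, last_sync_us, interval_us):
    """Microseconds to wait before the next sync so that syncs are at
    least interval_us apart; 0 means sync immediately."""
    elapsed = now_us - last_sync_us
    return max(0, interval_us - elapsed)
```

With a 1000-microsecond interval, a commit arriving 500 microseconds after the last sync waits another 500; one arriving after the interval has already elapsed syncs at once.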

    If soft group commit is enabled (aria_group_commit="soft")

    Note that soft group commit should only be used if you can afford to lose a few rows if your machine shuts down hard (as in the case of a power failure).

    This works like non-group commit, but the thread doesn't do any real sync(). If aria_group_commit_interval is not zero, the sync() calls are performed by a service thread at the given rate when needed (when a new LSN appears). If aria_group_commit_interval is zero, there are no sync() calls.

    Code

    The code for this can be found in storage/maria/ma_loghandler.c::translog_flush().

    This page is licensed: CC BY-SA / Gnu FDL


    All these can be combined or transformed by further SQL operations. This makes working with CONNECT much more flexible than just using the LOAD statement.

    This page is licensed: GPLv2


    Backup and Restore Overview

    This guide provides an introduction to the various backup and restore methods available in MariaDB, helping you choose the right strategy for your data.

    This article briefly discusses the main ways to backup MariaDB. For detailed descriptions and syntax, see the individual pages. More detail is in the process of being added.

    Logical vs Physical Backups

    Logical backups consist of the SQL statements necessary to restore the data, such as CREATE DATABASE, CREATE TABLE and INSERT.

    Physical backups are performed by copying the individual data files or directories.

    The main differences are as follows:

    • logical backups are more flexible, as the data can be restored on other hardware configurations, MariaDB versions or even on another DBMS, while physical backups cannot be imported on significantly different hardware, a different DBMS, or potentially even a different MariaDB version.

    • logical backups can be performed at the level of database and table, while physical backups are at the level of directories and files. In the MyISAM and Aria storage engines, each table has an equivalent set of files.

    • logical backups are larger in size than the equivalent physical backup.

    • logical backups take more time to both back up and restore than the equivalent physical backup.

    • log files and configuration files are not part of a logical backup

    Backup Tools

    mariadb-backup

    The mariadb-backup program is a fork of Percona XtraBackup with added support for MariaDB-specific features such as data-at-rest encryption and InnoDB page compression.

    mariadb-dump

    mariadb-dump (previously called mysqldump) performs a logical backup. It is the most flexible way to perform a backup and restore, and a good choice when the data size is relatively small.

    For large datasets, the backup file can be large, and the restore time lengthy.

    mariadb-dump dumps the data into SQL format (it can also dump into other formats, such as CSV or XML) which can then easily be imported into another database. The data can be imported into other versions of MariaDB, MySQL, or even another DBMS entirely, assuming there are no version or DBMS-specific statements in the dump.

    mariadb-dump dumps triggers along with tables, as these are part of the table definition. However, stored procedures, stored functions, and events are not, and need extra parameters to be recreated explicitly (for example, --routines and --events). Stored routines and events are, however, also part of the system tables.

    InnoDB Logical Backups

    InnoDB uses the buffer pool, which stores data and indexes from its tables in memory. This buffer is very important for performance. If the InnoDB data doesn't fit in memory, it is important that the buffer contains the most frequently accessed data. However, recently accessed data is a candidate for insertion into the buffer pool. If not properly configured, when a table scan happens, InnoDB may copy the whole contents of a table into the buffer pool. The problem with logical backups is that they always imply full table scans.

    An easy way to avoid this is by increasing the value of the innodb_old_blocks_time system variable. It represents the number of milliseconds that must pass before a recently accessed page can be put into the "new" sublist in the buffer pool. Data which is accessed only once then remains in the "old" sublist, which means it will soon be evicted from the buffer pool. Since during the backup process the "old" sublist is likely to store data that is not useful, one could also consider resizing it by changing the value of the innodb_old_blocks_pct system variable.

    It is also possible to explicitly dump the buffer pool to disk before starting a logical backup, and restore it after the process. This undoes any negative change to the buffer pool which happens during the backup. To dump the buffer pool, the innodb_buffer_pool_dump_now system variable can be set to ON. To restore it, the innodb_buffer_pool_load_now system variable can be set to ON.

    mariadb-dump Examples

    Backing up a single database
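
    As a sketch (the database and file names are placeholders), a single database can be dumped to a file:

    ```shell
    mariadb-dump db_name > backup-file.sql
    ```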

    Restoring or loading the database
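
    A matching restore sketch, feeding the dump back into the client (names are placeholders; the database must already exist):

    ```shell
    mariadb db_name < backup-file.sql
    ```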

    See the mariadb-dump page for detailed syntax and examples.

    mariadb-hotcopy

    mariadb-hotcopy performs a physical backup, and works only for backing up MyISAM and ARCHIVE tables. It can only be run on the same machine as the location of the database directories.

    mariadb-hotcopy Examples
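
    A minimal sketch (database name and target directory are placeholders):

    ```shell
    mariadb-hotcopy db_name /path/to/backup_dir
    ```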

    Percona XtraBackup

    Percona XtraBackup is not supported in MariaDB. mariadb-backup is the recommended backup method to use instead of Percona XtraBackup.

    Percona XtraBackup is a tool for performing fast, hot backups. It was designed specifically for InnoDB and XtraDB databases, but can be used with any storage engine (although not with certain MariaDB-specific features). It is not included with MariaDB.

    Filesystem Snapshots

    Some filesystems, like Veritas, support snapshots. During the snapshot, the table must be locked. The proper steps to obtain a snapshot are:

    • From the client, execute FLUSH TABLES WITH READ LOCK. The client must remain open.

    • From a shell, execute mount vxfs snapshot

    • The client can then execute UNLOCK TABLES.

    • Copy the snapshot files.
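
    The steps above can be sketched as follows (the snapshot mount command and paths are filesystem-specific placeholders):

    ```shell
    # In an open mariadb client session (keep it open):
    #   FLUSH TABLES WITH READ LOCK;
    mount vxfs snapshot                 # filesystem-specific snapshot command
    # Back in the same client session:
    #   UNLOCK TABLES;
    cp -a /snapshot/datadir /backup/    # copy the snapshot files
    ```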

    LVM

    A widely-used physical backup method uses LVM snapshots, typically with the mylvmbackup Perl script as a wrapper. See the mylvmbackup documentation for more information.

    Percona TokuBackup

    For details, see the Percona TokuBackup documentation.

    dbForge Studio for MySQL

    Besides the system utilities, it is possible to use third-party GUI tools to perform backup and restore operations. In this context, it is worth mentioning dbForge Studio for MySQL, a feature-rich database IDE that is fully compatible with MariaDB and delivers extensive backup functionality.

    The backup and restore module of the Studio allows precise up to particular database objects. The feature of scheduling regular backups offers specific settings to handle errors and keep a log of them. Additionally, settings and configurations can be saved for later reuse.

    These operations are wizard-aided allowing users to set up all tasks in a visual mode.


    This page is licensed: CC BY-SA / Gnu FDL

    Partial Backup and Restore with mariadb-backup

    Back up specific databases or tables. This guide explains how to filter your backup to include only the data you need.

    When using mariadb-backup, you have the option of performing partial backups. Partial backups allow you to choose which databases or tables to back up, as long as each table or partition involved is in an InnoDB file-per-table tablespace. This page documents how to perform partial backups.

    Backing up the Database Server

    Just like with full backups, in order to back up the database, you need to run mariadb-backup with the --backup option to tell it to perform a backup and with the --target-dir option to tell it where to place the backup files. The target directory must be empty or not exist.

    For a partial backup, there are a few other arguments that you can provide as well:

    • To tell it which databases to backup, you can provide the --databases option.

    • To tell it which databases to exclude from the backup, you can provide the --databases-exclude option.

    • To tell it to check a file for the databases to backup, you can provide the --databases-file option.

    The non-file partial backup options support regex in the database and table names.

    For example, to take a backup of any database that starts with the string app1_ and any table in those databases that start with the string tab_, run the following command:
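
    A sketch of such a command (target directory, credentials, and the exact pattern syntax are placeholders; check the mariadb-backup documentation for the pattern format your version accepts):

    ```shell
    mariadb-backup --backup \
       --target-dir=/var/mariadb/backup/ \
       --user=backup_user --password=backup_passwd \
       --databases='app1_*' --tables='tab_*'
    ```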

    Using --history with Partial Backups

    You can use the --history option with a partial backup to log the operation in the history table for auditing purposes.

    You cannot use a partial backup as the base for an incremental backup history chain. The --incremental-history-name option is incompatible with partial backups because restoring partial incrementals requires specific preparation steps (--export) that the history feature does not automate.

    mariadb-backup cannot back up a subset of partitions from a partitioned table. Backing up a partitioned table is an all-or-nothing selection. If you need to back up a subset of partitions, one possibility is, instead of using mariadb-backup, to export the file-per-table tablespaces of the partitions.

    The time the backup takes depends on the size of the databases or tables you're backing up. You can cancel the backup if you need to, as the backup process does not modify the database.

    mariadb-backup writes the backup files to the target directory. If the target directory doesn't exist, then it creates it. If the target directory exists and contains files, then it raises an error and aborts.

    Preparing the Backup

    Just like with full backups, the data files that mariadb-backup creates in the target directory are not point-in-time consistent, given that the data files are copied at different times during the backup operation. If you try to restore from these files, InnoDB notices the inconsistencies and crashes to protect you from corruption. In fact, for partial backups, the backup is not even a completely functional MariaDB data directory, so InnoDB would raise more errors than it would for full backups. This point will also be very important to keep in mind during the restore process.

    Before you can restore from a backup, you first need to prepare it to make the data files consistent. You can do so with the --prepare command option.

    Partial backups rely on InnoDB's transportable tablespaces. For MariaDB to import tablespaces like these, InnoDB looks for a file with a .cfg extension. For mariadb-backup to create these files, you also need to add the --export option during the prepare step.

    For example, you might execute the following command:
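
    A sketch of the prepare step (the target directory is a placeholder):

    ```shell
    mariadb-backup --prepare --export \
       --target-dir=/var/mariadb/backup/
    ```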

    If this operation completes without error, then the backup is ready to be restored.

    Older versions of mariadb-backup did not support the --export option, which means they could not create .cfg files for InnoDB file-per-table tablespaces during the --prepare stage. You can still import file-per-table tablespaces without the .cfg files in many cases, so it may still be possible in those versions to restore partial backups or to restore individual tables and partitions with just the .ibd files. If you have a full backup and you need to create .cfg files for InnoDB file-per-table tablespaces, then you can do so by preparing the backup as usual without the --export option, then restoring the backup and starting the server. At that point, you can use the server's built-in features to copy the transportable tablespaces.

    Restoring the Backup

    The restore process for partial backups is quite different than the process for full backups. A partial backup is not a completely functional data directory. The data dictionary in the InnoDB system tablespace will still contain entries for the databases and tables that were not included in the backup.

    Rather than using the --copy-back or the --move-back options, each individual InnoDB file-per-table tablespace file will have to be manually imported into the target server. The process used to import the file depends on whether partitioning is involved.

    Restoring Individual Non-Partitioned Tables

    To restore individual non-partitioned tables from a backup, find the .ibd and .cfg files for the table in the backup, and then import them using the Importing Transportable Tablespaces for Non-partitioned Tables process.

    Restoring Individual Partitions and Partitioned Tables

    To restore individual partitions or partitioned tables from a backup, find the .ibd and .cfg files for the partitions in the backup, and then import them using the Importing Transportable Tablespaces for Partitioned Tables process.

    This page is licensed: CC BY-SA / Gnu FDL

    Partition Pruning and Selection

    Understand how the optimizer automatically prunes irrelevant partitions and how to explicitly select partitions in your queries for efficiency.

    When a WHERE clause is related to the partitioning expression, the optimizer knows which partitions are relevant for the query. Other partitions will not be read. This optimization is called partition pruning.

    EXPLAIN PARTITIONS can be used to know which partitions are read for a given query. A column called partitions will contain a comma-separated list of the accessed partitions. For example:

    EXPLAIN PARTITIONS SELECT * FROM orders WHERE id < 15000000;
    +------+-------------+--------+------------+-------+---------------+---------+---------+------+------+-------------+
    | id   | select_type | table  | partitions | type  | possible_keys | key     | key_len | ref  | rows | Extra       |
    +------+-------------+--------+------------+-------+---------------+---------+---------+------+------+-------------+
    |    1 | SIMPLE      | orders | p0,p1      | range | PRIMARY       | PRIMARY | 4       | NULL |    2 | Using where |
    +------+-------------+--------+------------+-------+---------------+---------+---------+------+------+-------------+

    Sometimes the WHERE clause does not contain the necessary information to use partition pruning, or the optimizer cannot infer this information. However, we may know which partitions are relevant for the query. We can force MariaDB to only access the specified partitions by adding a PARTITION clause. This feature is called partition selection. For example:
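
    A sketch of partition selection (the table, partition, and column names are illustrative):

    ```sql
    SELECT * FROM orders PARTITION (p3) WHERE customer_id = 17;
    ```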

    The PARTITION clause is supported for all DML statements: SELECT, INSERT, REPLACE, UPDATE, DELETE, and LOAD DATA.

    Partition Pruning and Triggers

    In general, partition pruning is applied to statements contained in triggers.

    However, note that if a BEFORE INSERT or BEFORE UPDATE trigger is defined on a table, MariaDB doesn't know in advance if the columns used in the partitioning expression are changed. For this reason, it is forced to lock all partitions.

    This page is licensed: CC BY-SA / Gnu FDL

    RANGE COLUMNS and LIST COLUMNS Partitioning Types

    Discover these variants that allow partitioning based on multiple columns and non-integer types, offering greater flexibility than standard RANGE/LIST.

    RANGE COLUMNS and LIST COLUMNS are variants of, respectively, RANGE and LIST. With these partitioning types, there is not a single partitioning expression; instead, a list of one or more columns is accepted. The following rules apply:

    • The list can contain one or more columns.

    • Columns can be of integer, string, DATE, and DATETIME types.

    • Only bare columns are permitted; no expressions.

    All the specified columns are compared to the specified values to determine which partition should contain a specific row. See below for details.

    Syntax

    The last part of a CREATE TABLE statement can be the definition of the new table's partitions. In the case of RANGE COLUMNS partitioning, the syntax is as follows:
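
    A sketch of the RANGE COLUMNS form (names are placeholders):

    ```sql
    PARTITION BY RANGE COLUMNS (col1[, col2 ...])
    (
        PARTITION partition_name VALUES LESS THAN (value[, value ...]),
        ...
    )
    ```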

    The syntax for LIST COLUMNS is as follows:
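
    A sketch of the LIST COLUMNS form (names are placeholders):

    ```sql
    PARTITION BY LIST COLUMNS (col1[, col2 ...])
    (
        PARTITION partition_name VALUES IN (value_list),
        ...
    )
    ```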

    partition_name is the name of a partition.

    Comparisons

    To determine which partition should contain a row, all specified columns are compared to each partition definition.

    With LIST COLUMNS, a row matches a partition if all row values are identical to one of the specified value tuples. At most one partition can match the row.

    With RANGE COLUMNS, a row matches a partition if its values are less than the specified value tuple in lexicographic order. The first partition that matches the row values is used.

    The DEFAULT partition catches all records which do not fit in other partitions. Only one DEFAULT partition is allowed.

    Examples

    RANGE COLUMNS partition:
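
    A sketch (table, column, and partition names are illustrative):

    ```sql
    CREATE TABLE t_range (a INT, b INT)
      PARTITION BY RANGE COLUMNS (a, b) (
        PARTITION p0 VALUES LESS THAN (100, 200),
        PARTITION p1 VALUES LESS THAN (500, 600),
        PARTITION p2 VALUES LESS THAN (MAXVALUE, MAXVALUE)
      );
    ```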

    LIST COLUMNS partition:
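
    A sketch (table, column, and partition names are illustrative), including a DEFAULT partition:

    ```sql
    CREATE TABLE t_list (country CHAR(3), kind CHAR(1))
      PARTITION BY LIST COLUMNS (country, kind) (
        PARTITION p0 VALUES IN (('USA', 'A'), ('CAN', 'A')),
        PARTITION p1 VALUES IN (('USA', 'B')),
        PARTITION pdef DEFAULT
      );
    ```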

    This page is licensed: CC BY-SA / Gnu FDL

    Partitioning Limitations

    This page outlines constraints when using partitioning, such as the maximum number of partitions and restrictions on foreign keys and query cache usage.

    The following limitations apply to partitioning in MariaDB:

    • Each table can contain a maximum of 8192 partitions.

    • Queries are never parallelized, even when they involve multiple partitions.

    • A table can only be partitioned if the storage engine supports partitioning.

    • All partitions must use the same storage engine. A workaround exists via the CONNECT storage engine's partition table type.

    • A partitioned table cannot contain, or be referenced by, foreign keys.

    • The query cache is not aware of partitioning and partition pruning. Modifying a partition will invalidate the entries related to the whole table.

    • Updates can run more slowly when binary logging is enabled and a partitioned table is updated than an equivalent update of a non-partitioned table.

    • All columns used in the partitioning expression for a partitioned table must be part of every unique key that the table may have.

    • In older versions, it is not possible to create partitions on tables that contain certain features; check the documentation for your version.

    See Also

    • The INFORMATION_SCHEMA.PARTITIONS table contains information about existing partitions.

    • Partition Maintenance, for suggestions on using partitions.

    This page is licensed: CC BY-SA / Gnu FDL

    DROP FUNCTION

    The DROP FUNCTION statement removes a stored function from the database, deleting its definition and associated privileges.

    Syntax
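
    A sketch of the syntax:

    ```sql
    DROP FUNCTION [IF EXISTS] f_name
    ```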

    Description

    The DROP FUNCTION statement is used to drop a stored function or a user-defined function (UDF). That is, the specified routine is removed from the server, along with all privileges specific to the function. You must have the ALTER ROUTINE privilege for the routine in order to drop it. If the automatic_sp_privileges server system variable is set, both the ALTER ROUTINE and EXECUTE privileges are granted automatically to the routine creator - see Stored Routine Privileges.

    IF EXISTS

    The IF EXISTS clause is a MySQL/MariaDB extension. It prevents an error from occurring if the function does not exist. A NOTE is produced that can be viewed with SHOW WARNINGS.

    For dropping a user-defined function (UDF), see DROP FUNCTION UDF.

    Examples
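
    A sketch (the function name is a placeholder):

    ```sql
    DROP FUNCTION hello;

    DROP FUNCTION IF EXISTS hello;
    ```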

    See Also

    This page is licensed: GPLv2, originally from

    Stored Function Limitations

    This page details the restrictions on stored functions, such as the inability to return result sets or use transaction control statements.

    The following restrictions apply to stored functions.

    • All of the restrictions listed in Stored Routine Limitations.

    • Statements that return a result set are not permitted. For example, a regular SELECT is not permitted, but SELECT ... INTO is. Cursors with FETCH statements are permitted.

    • FLUSH statements are not permitted.

    • Statements that perform explicit or implicit commits or rollbacks are not permitted.

    • Cannot be used recursively.

    • Cannot make changes to a table that is already in use (reading or writing) by the statement invoking the stored function.

    • Cannot refer to a temporary table multiple times under different aliases, even in different statements.

    • ROLLBACK TO SAVEPOINT and RELEASE SAVEPOINT statements inside a stored function cannot refer to a savepoint defined outside the current function.

    • Prepared statements (PREPARE, EXECUTE, DEALLOCATE PREPARE) cannot be used, so statements cannot be constructed as strings and then executed.

    This page is licensed: CC BY-SA / Gnu FDL

    ALTER PROCEDURE

    The ALTER PROCEDURE statement modifies the characteristics of an existing stored procedure, such as its security context or comment, without changing its logic.

    Syntax
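
    A sketch of the syntax:

    ```sql
    ALTER PROCEDURE proc_name [characteristic ...]

    characteristic:
        COMMENT 'string'
      | LANGUAGE SQL
      | { CONTAINS SQL | NO SQL | READS SQL DATA | MODIFIES SQL DATA }
      | SQL SECURITY { DEFINER | INVOKER }
    ```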

    Description

    This statement can be used to change the characteristics of a stored procedure. More than one change may be specified in an ALTER PROCEDURE statement. However, you cannot change the parameters or body of a stored procedure using this statement. To make such changes, you must drop and re-create the procedure using DROP PROCEDURE and CREATE PROCEDURE.

    You must have the ALTER ROUTINE privilege for the procedure. By default, that privilege is granted automatically to the procedure creator. See Stored Routine Privileges.

    Example
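
    A sketch (the procedure name is a placeholder):

    ```sql
    ALTER PROCEDURE simpleproc SQL SECURITY INVOKER;
    ```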

    See Also

    This page is licensed: GPLv2, originally from

    Introduction to the CONNECT Engine

    The CONNECT storage engine has been deprecated.

    CONNECT is not just a new “YASE” (Yet another Storage Engine) that provides another way to store data with additional features. It brings a new dimension to MariaDB, already one of the best products to deal with traditional database transactional applications, further into the world of business intelligence and data analysis, including NoSQL facilities. Indeed, BI is the set of techniques and tools for the transformation of raw data into meaningful and useful information. And where is this data?

    "It's amazing in an age where relational databases reign supreme when it comes to managing data that so much information still exists outside RDBMS engines in the form of flat files and other such constructs. In most enterprises, data is passed back and forth between disparate systems in a fashion and speed that would rival the busiest expressways in the world, with much of this data existing in common, delimited files. Target systems intercept these source files and then typically proceed to load them via ETL (extract, transform, load) processes into databases that then utilize the information for business intelligence, transactional functions, or other standard operations. ETL tasks and data movement jobs can consume quite a bit of time and resources, especially if large volumes of data are present that require loading into a database. This being the case, many DBAs welcome alternative means of accessing and managing data that exists in file format."

    — Robin Schumacher [1]

    What he describes is known as MED (Management of External Data) enabling the handling of data not stored in a DBMS database as if it were stored in tables. An ISO standard exists that describes one way to implement and use MED in SQL by defining foreign tables for which an external FDW (Foreign Data Wrapper) has been developed in C.

    However, since this was written, a new source of data has developed in the "cloud". Data exists worldwide and, in particular, can be obtained in JSON or XML format in answer to REST queries. With recent versions of CONNECT, it is possible to create JSON, XML or CSV tables based on data retrieved from such REST queries.

    MED as described above is a rather complex way to achieve this goal and MariaDB does not support the ISO SQL/MED standard. But, to cover the need, possibly in transactional but mostly in decision support applications, the CONNECT storage engine supports MED in a much simpler way.

    The main features of CONNECT are:

    1. No need for additional SQL language extensions.

    2. Embedded wrappers for many external data types (files, data sources, virtual).

    3. NoSQL query facilities for JSON, XML and HTML files, and using JSON UDFs.

    4. NoSQL data obtained from REST queries (requires cpprestsdk).

    With CONNECT, MariaDB has one of the most advanced implementations of MED and NoSQL, without the need for complex additions to the SQL syntax (foreign tables are "normal" tables using the CONNECT engine).

    Giving MariaDB easy and natural access to external data enables the use of all of its powerful functions and SQL-handling abilities for developing business intelligence applications.

    With version 1.07 of CONNECT, retrieving data from REST queries is available in all binary distributions of MariaDB, and, from 1.07.002, CONNECT allows workspaces greater than 4GB.

    1. Robin Schumacher is Vice President Products at DataStax and former Director of Product Management at MySQL. He has over 13 years of database experience in DB2, MySQL, Oracle, SQL Server and other database engines.

    Aria Storage Formats

    Understand the different row formats supported by Aria, particularly the default PAGE format which enables crash safety and better concurrency.

    The Aria storage engine supports three different table storage formats: FIXED, DYNAMIC and PAGE. They can be set with the ROW_FORMAT option in the CREATE TABLE statement. PAGE is the default format, while FIXED and DYNAMIC are essentially the same as the MyISAM formats.

    The SHOW TABLE STATUS statement can be used to see the storage format used by a table.
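
    A sketch (the table name is a placeholder) showing how to set and then inspect the row format:

    ```sql
    CREATE TABLE aria_t (a INT, b VARCHAR(10)) ENGINE=Aria ROW_FORMAT=DYNAMIC;
    SHOW TABLE STATUS LIKE 'aria_t';
    ```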

    Fixed-length

    Fixed-length (or static) tables contain records of a fixed length. Each column is the same length for all records, regardless of the actual contents. It is the default format if a table has no BLOB, TEXT, VARCHAR or VARBINARY fields and no ROW_FORMAT is provided. You can also specify a fixed table with ROW_FORMAT=FIXED in the table definition.

    Tables containing BLOB or TEXT fields cannot be FIXED, as by design these are both dynamic fields.

    Fixed-length tables have a number of characteristics:

    • fast, since MariaDB will always know where a record begins

    • easy to cache

    • take up more space than dynamic tables, as the maximum amount of storage space is allocated to each record.

    • reconstructing after a crash is uncomplicated due to the fixed positions

    Dynamic

    Dynamic tables contain records of variable length. It is the default format if a table has any BLOB, TEXT, VARCHAR or VARBINARY fields and no ROW_FORMAT is provided. You can also specify a DYNAMIC table with ROW_FORMAT=DYNAMIC in the table definition.

    Dynamic tables have a number of characteristics:

    • Each row contains a header indicating the length of the row.

    • Rows tend to become fragmented easily. Updating a record to a longer length will likely cause part of it to be stored in a different place on the disk.

    • All string columns with a length of four or more are dynamic.

    • They require much less space than fixed-length tables.

    Page

    Page format is the default format for Aria tables, and is the only format that can be used if TRANSACTIONAL=1.

    Page tables have a number of characteristics:

    • Data is cached by the page cache, which gives better random performance as it uses fewer system calls.

    • Does not fragment as easily as the DYNAMIC format during UPDATEs. The maximum number of fragments is very low.

    • Updates are faster than with dynamic tables.

    Transactional

    See the TRANSACTIONAL table option documentation for the impact of the TRANSACTIONAL option on the row format.

    This page is licensed: CC BY-SA / Gnu FDL

    Binary Logging of Stored Routines

    When binary logging is enabled, stored routines may require special handling (like SUPER privileges) if they are non-deterministic, to ensure consistent replication.

    Binary logging can be row-based, statement-based, or a mix of the two. See Binary Log Formats for more details on the formats. If logging is statement-based, it is possible that a statement will have different effects on the master and on the slave.

    Stored routines are particularly prone to this, for two main reasons:

    • stored routines can be non-deterministic, in other words non-repeatable, and therefore have different results each time they are run.

    • the slave thread executing the stored routine on the slave holds full privileges, while this may not be the case when the routine was run on the master.

    The problems with replication will only occur with statement-based logging. If row-based logging is used, since changes are made to rows based on the master's rows, there is no possibility of the slave and master getting out of sync.

    By default, with row-based replication, triggers run on the master, and the effects of their executions are replicated to the slaves. However, it is possible to run triggers on the slaves. See Running Triggers on the Slave for Row-based Events.

    How MariaDB Handles Statement-Based Binary Logging of Routines

    If the following criteria are met, then there are some limitations on whether stored routines can be created:

    • The binary log is enabled, and the binlog_format system variable is set to STATEMENT. See Binary Log Formats for more information.

    • The log_bin_trust_function_creators system variable is set to OFF, which is the default value.

    If the above criteria are met, then the following limitations apply:

    • When a stored function is created, it must be declared as either DETERMINISTIC, NO SQL or READS SQL DATA, or else an error will occur. MariaDB cannot check whether a function is deterministic, and relies on the correct definition being used.

    • To create or modify a stored function, a user requires the SUPER privilege as well as the regular privileges. See Stored Routine Privileges for these details.

    Examples

    A deterministic function:
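
    A sketch (the function name is a placeholder):

    ```sql
    CREATE FUNCTION add_one (x INT) RETURNS INT DETERMINISTIC
      RETURN x + 1;
    ```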

    A non-deterministic function, since it uses the UUID() function:
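
    A sketch (the function name is a placeholder; UUID() is one example of a non-deterministic built-in):

    ```sql
    CREATE FUNCTION new_id () RETURNS CHAR(36)
      RETURN UUID();
    ```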

    This page is licensed: CC BY-SA / Gnu FDL

    Stored Routine Privileges

    This page explains the privileges required to create, alter, execute, and drop stored routines, including the automatic grants for creators.

    It's important to give careful thought to the privileges associated with stored functions and stored procedures. The following is an explanation of how they work.

    Creating Stored Routines

    • To create a stored routine, the CREATE ROUTINE privilege is needed. The SUPER privilege is required if a DEFINER is declared that is not the creator's account (see below). The SUPER privilege is also required if statement-based binary logging is used. See Binary Logging of Stored Routines for more details.

    Altering Stored Routines

    • To make changes to, or drop, a stored routine, the ALTER ROUTINE privilege is needed. The creator of a routine is temporarily granted this privilege if they attempt to change or drop a routine they created, unless the automatic_sp_privileges variable is set to 0 (it defaults to 1).

    • The SUPER privilege is also required if statement-based binary logging is used. See Binary Logging of Stored Routines for more details.

    Running Stored Routines

    • To run a stored routine, the EXECUTE privilege is needed. This is also temporarily granted to the creator if they attempt to run their routine, unless the automatic_sp_privileges variable is set to 0.

    • The SQL SECURITY clause (by default DEFINER) specifies what privileges are used when a routine is called. If SQL SECURITY is INVOKER, the function body is evaluated using the privileges of the user calling the function. If SQL SECURITY is DEFINER, the function body is always evaluated using the privileges of the definer account.

    DEFINER Clause

    If left out, the DEFINER is treated as the account that created the stored routine or view. If the account creating the routine has the SUPER privilege, another account can be specified as the DEFINER.

    SQL SECURITY Clause

    This clause specifies the context the stored routine or view will run as. It can take two values - DEFINER or INVOKER. DEFINER is the account specified as the DEFINER when the stored routine or view was created (see the section above). INVOKER is the account invoking the routine or view.

    As an example, let's assume a routine, created by a superuser who's specified as the DEFINER, deletes all records from a table. If SQL SECURITY=DEFINER, anyone running the routine, regardless of whether they have delete privileges, is able to delete the records. If SQL SECURITY=INVOKER, the routine will only delete the records if the account invoking the routine has permission to do so.

    INVOKER is usually less risky, as a user cannot perform any operations they're normally unable to. However, it's not uncommon for accounts to have relatively limited permissions, but be specifically granted access to routines, which are then invoked in the DEFINER context.

    Dropping Stored Routines

    All privileges that are specific to a stored routine are dropped when a DROP FUNCTION or DROP PROCEDURE is run. However, if CREATE OR REPLACE FUNCTION or CREATE OR REPLACE PROCEDURE is used to drop and replace the routine, any privileges specific to that routine will not be dropped.

    See Also

    • A mariadb.com blog post on what to do after you've dropped a user, and now want to change the DEFINER on all database objects that currently have it set to this dropped user.

    This page is licensed: CC BY-SA / Gnu FDL

    Aria Status Variables

    A list of status variables specific to the Aria engine, providing metrics on page cache usage, transaction log syncs, and other internal operations.

    This page documents status variables related to the Aria storage engine. See Server Status Variables for a complete list of status variables that can be viewed with SHOW STATUS.

    See also the Full list of MariaDB options, system and status variables.

    Aria_pagecache_blocks_not_flushed

    • Description: The number of dirty blocks in the Aria page cache. The global value can be flushed by FLUSH STATUS.

    • Scope: Global

    • Data Type: numeric

    Aria_pagecache_blocks_unused

    • Description: Free blocks in the Aria page cache. The global value can be flushed by FLUSH STATUS.

    • Scope: Global

    • Data Type: numeric

    Aria_pagecache_blocks_used

    • Description: Blocks used in the Aria page cache. The global value can be flushed by FLUSH STATUS.

    • Scope: Global

    • Data Type: numeric

    Aria_pagecache_read_requests

    • Description: The number of requests to read something from the Aria page cache.

    • Scope: Global

    • Data Type: numeric

    Aria_pagecache_reads

    • Description: The number of Aria page cache read requests that caused a block to be read from the disk.

    • Scope: Global

    • Data Type: numeric

    Aria_pagecache_write_requests

    • Description: The number of requests to write a block to the Aria page cache.

    • Scope: Global

    • Data Type: numeric

    Aria_pagecache_writes

    • Description: The number of blocks written to disk from the Aria page cache.

    • Scope: Global

    • Data Type: numeric

    Aria_transaction_log_syncs

    • Description: The number of Aria log fsyncs.

    • Scope: Global

    • Data Type: numeric

    This page is licensed: CC BY-SA / Gnu FDL

    Connecting to MariaDB Server

    Learn the various parameters and options for connecting to a MariaDB server using the command-line client and other tools.

    This article covers connecting to MariaDB and the basic connection parameters. If you are completely new to MariaDB, take a look at A MariaDB Primer first.

    In order to connect to the MariaDB server, the client software must provide the correct connection parameters. The client software will most often be the mariadb command-line client, used for entering statements from the command line, but the same concepts apply to any client, such as a graphical client, a client to run backups such as mariadb-backup, etc. The rest of this article assumes that the mariadb command-line client is used.

    If a connection parameter is not provided, it will revert to a default value.

    For example, to connect to MariaDB using only default values with the mariadb client, enter the following from the command line:
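    With every parameter left at its default, the invocation is just the bare command; a variant with common parameters spelled out is shown for comparison (the user name alice is hypothetical):

    ```shell
    # Connect using defaults for host, port, user, and so on:
    mariadb

    # The same connection with common parameters made explicit;
    # --password with no value makes the client prompt for one:
    mariadb --host=localhost --user=alice --password
    ```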

    In this case, the following defaults apply:

    • The host name is localhost

    Basic SQL Debugging

    Learn strategies for debugging SQL queries, including formatting for readability, using aliases effectively, and interpreting syntax errors.

    Designing Queries

    Following a few conventions makes finding errors in queries a lot easier, especially when you ask for help from people who might know SQL, but know nothing about your particular schema. A query easy to read is a query easy to debug. Use whitespace to group clauses within the query. Choose good table and field aliases to add clarity, not confusion. Choose the syntax that supports the query's meaning.
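    As an illustration of these conventions (the schema here is hypothetical), compare a one-line query with the same query laid out so each clause is visible and the aliases carry meaning:

    ```sql
    -- Hard to scan:
    SELECT o.id, c.name FROM orders o JOIN customers c ON o.customer_id=c.id WHERE o.total>100 ORDER BY o.total DESC;

    -- Whitespace groups the clauses; aliases add clarity, not confusion:
    SELECT ord.id, cust.name
    FROM orders AS ord
    JOIN customers AS cust
      ON ord.customer_id = cust.id
    WHERE ord.total > 100
    ORDER BY ord.total DESC;
    ```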

    Remote Client Access

    Configure MariaDB to accept remote connections by adjusting the bind-address directive and granting appropriate user privileges.

    Some MariaDB packages bind MariaDB to 127.0.0.1 (the loopback IP address) by default as a security measure, using the bind-address configuration directive. Old MySQL packages sometimes disabled TCP/IP networking altogether using the skip-networking directive. Before going into how to configure these, let's explain what each of them actually does:

    • skip-networking is fairly simple. It just tells MariaDB to run without any of the TCP/IP networking options.

    • bind-address requires a little background. A given server usually has at least two networking interfaces (although this is not required) and can easily have more. The two most common are the loopback network device and a physical Network Interface Card (NIC), which allows communication with the network. MariaDB is bound to the loopback interface by default because this makes it impossible to connect to the TCP port on the server from a remote host (the bind-address must refer to a local IP address, or you will receive a fatal error and MariaDB will not start). This is of course not desirable if you want to use the TCP port from a remote host, so you must remove the bind-address directive or replace it with an address that is reachable from the network (for example, the NIC's IP address, or 0.0.0.0 to listen on all interfaces).
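    The directives above live in the server option file. A minimal sketch (the file path varies by package; /etc/my.cnf.d/server.cnf is one common location):

    ```ini
    [mysqld]
    # Default on many packages: only local connections are accepted.
    #bind-address = 127.0.0.1

    # Listen on all interfaces instead:
    bind-address = 0.0.0.0

    # The old MySQL way of disabling TCP/IP entirely; do not combine
    # this with a remote-access setup:
    #skip-networking
    ```

    Remember that the server must be restarted for these changes to take effect, and that remote users still need appropriate privileges granted for their host.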

    Adding and Changing Data in MariaDB

    This guide provides a walkthrough of the INSERT, UPDATE, and DELETE statements, demonstrating how to add, modify, and remove data in tables.

    There are several ways to add and to change data in MariaDB. There are a few SQL statements that you can use, each with a few options. Additionally, there are twists that you can do by mixing SQL statements together with various clauses. In this article, we will explore the ways in which data can be added and changed in MariaDB.

    Adding Data

    To add data to a table in MariaDB, you will need to use the INSERT statement. Its basic, minimal syntax is the command INSERT followed by the table name and then the keyword VALUES with a comma-separated list of values contained in parentheses:
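    For example, assuming a three-column table named table1:

    ```sql
    INSERT table1
    VALUES ('text1', 'text2', 'text3');
    ```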

    Stored Procedure Overview

    Stored procedures are precompiled collections of SQL statements stored on the server, allowing for encapsulated logic, parameterized execution, and improved application performance.

    A Stored Procedure is a routine invoked with a CALL statement. It may have input parameters, output parameters, and parameters that are both input and output parameters.
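    As a sketch of the parameter modes (the procedure and table names here are hypothetical), a procedure with an IN and an OUT parameter might look like:

    ```sql
    DELIMITER //

    CREATE PROCEDURE Get_balance(IN p_customer_id INT, OUT p_balance DECIMAL(10,2))
      READS SQL DATA
    BEGIN
      SELECT balance INTO p_balance
        FROM Customers
        WHERE customer_id = p_customer_id;
    END //

    DELIMITER ;

    -- The OUT parameter is returned through a user variable:
    CALL Get_balance(42, @balance);
    SELECT @balance;
    ```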

    Creating a Stored Procedure

    Here's a skeleton example to see a stored procedure in action:
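    A minimal sketch, assuming a one-row table animal_count with a single animals column:

    ```sql
    DELIMITER //

    CREATE PROCEDURE Reset_animal_count()
     MODIFIES SQL DATA
     UPDATE animal_count SET animals = 0;
    //

    DELIMITER ;

    CALL Reset_animal_count();
    ```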

    First, the delimiter is changed, since the procedure definition will contain the regular semicolon delimiter. The procedure is named Reset_animal_count. MODIFIES SQL DATA indicates that the procedure will perform a write action of sorts, and modify data. It's for advisory purposes only. Finally, there's the actual SQL statement - an UPDATE.

    How mariadb-backup Works

    Deep dive into backup mechanics. Understand how the tool handles redo logs, locking, and file copying to ensure consistent backups.

    This is a description of the different stages in mariadb-backup, what they do and why they are needed.

    Execution Stages

    KEY Partitioning Type

    Understand KEY partitioning, similar to HASH but using MariaDB's internal hashing function on one or more columns to distribute data.

    Syntax

    Description

    Partitioning by key is a type of partitioning that is similar to partitioning by HASH and can be used in a similar way.

    CREATE DATABASE mydb;
    USE mydb;
    CREATE TABLE mytable ( id INT PRIMARY KEY, name VARCHAR(20) );
    INSERT INTO mytable VALUES ( 1, 'Will' );
    INSERT INTO mytable VALUES ( 2, 'Marry' );
    INSERT INTO mytable VALUES ( 3, 'Dean' );
    SELECT id, name FROM mytable WHERE id = 1;
    UPDATE mytable SET name = 'Willy' WHERE id = 1;
    SELECT id, name FROM mytable;
    DELETE FROM mytable WHERE id = 1;
    SELECT id, name FROM mytable;
    DROP DATABASE mydb;
    SELECT COUNT(1) FROM mytable; -- gives the number of records in the table
    mariadb-backup --user=root --backup --stream=xbstream | openssl enc -aes-256-cbc -k mypass > backup.xb.enc
    openssl enc -d -aes-256-cbc -k mypass -in backup.xb.enc | mbstream -x
    mariadb-backup --user=root --backup --stream=xbstream | gzip > backupstream.gz
    gunzip -c backupstream.gz | mbstream -x
    mariadb-backup --user=root --backup --stream=xbstream | gzip | openssl enc -aes-256-cbc -k mypass > backup.xb.gz.enc
    openssl enc -d -aes-256-cbc -k mypass -in backup.xb.gz.enc | gzip -d | mbstream -x
    mariadb-backup --user=root --backup --stream=xbstream | 7z a -si backup.xb.7z
    7z e backup.xb.7z -so | mbstream -x
    mariadb-backup --user=root --backup --stream=xbstream | zstd - -o backup.xb.zst -f -1
    zstd -d backup.xb.zst -c | mbstream -x
    mariadb-backup --user=root --backup --stream=xbstream | gpg -c --passphrase SECRET --batch --yes -o backup.xb.gpg
    gpg --decrypt --passphrase SECRET --batch --yes backup.xb.gpg | mbstream -x
    PARTITION BY HASH (partitioning_expression)
    [PARTITIONS(number_of_partitions)]
    MOD(partitioning_expression, number_of_partitions)
    CREATE OR REPLACE TABLE t1 (c1 INT, c2 DATETIME) 
      PARTITION BY HASH(TO_DAYS(c2)) 
      PARTITIONS 5;
    INSERT INTO t1 VALUES (1,'2023-11-15');
    
    SELECT PARTITION_NAME,TABLE_ROWS FROM INFORMATION_SCHEMA.PARTITIONS 
      WHERE TABLE_SCHEMA='test' AND TABLE_NAME='t1';
    +----------------+------------+
    | PARTITION_NAME | TABLE_ROWS |
    +----------------+------------+
    | p0             |          0 |
    | p1             |          0 |
    | p2             |          0 |
    | p3             |          0 |
    | p4             |          1 |
    +----------------+------------+
    DROP PROCEDURE [IF EXISTS] sp_name
    DROP PROCEDURE simpleproc;
    DROP PROCEDURE simpleproc;
    ERROR 1305 (42000): PROCEDURE test.simpleproc does not exist
    
    DROP PROCEDURE IF EXISTS simpleproc;
    Query OK, 0 rows affected, 1 warning (0.00 sec)
    
    SHOW WARNINGS;
    +-------+------+------------------------------------------+
    | Level | Code | Message                                  |
    +-------+------+------------------------------------------+
    | Note  | 1305 | PROCEDURE test.simpleproc does not exist |
    +-------+------+------------------------------------------+
    PARTITION BY LIST (partitioning_expression)
    (
    	PARTITION partition_name VALUES IN (value_list),
    	[ PARTITION partition_name VALUES IN (value_list), ... ]
            [ PARTITION partition_name DEFAULT ]
    )
    CREATE OR REPLACE TABLE t1 (
      num TINYINT(1) NOT NULL
    )
      ENGINE = InnoDB
      PARTITION BY LIST (num) (
        PARTITION p0 VALUES IN (0,1),
        PARTITION p1 VALUES IN (2,3),
        PARTITION p2 DEFAULT
      );
    do
       send changed buffers to disk
     while new_goal
    sync
    $ mariadb-backup --prepare --export \
       --target-dir=/var/mariadb/backup/ \
       --user=mariadb-backup --password=mypassword
    SELECT * FROM orders PARTITION (p3) WHERE user_id = 50;
    SELECT * FROM orders PARTITION (p2,p3) WHERE user_id >= 40;
    DROP FUNCTION [IF EXISTS] f_name
    ALTER PROCEDURE proc_name [characteristic ...]
    
    characteristic:
        { CONTAINS SQL | NO SQL | READS SQL DATA | MODIFIES SQL DATA }
      | SQL SECURITY { DEFINER | INVOKER }
      | COMMENT 'string'

    mariadb-backup was previously called mariabackup.


  • No fragmentation or need to re-organize, unless records have been deleted and you want to free the space up.

  • Restoring after a crash is more complicated than with FIXED tables.

  • Has a slight storage overhead, mainly notable on very small rows.

  • Slower to perform a full table scan.

  • Slower if there are multiple duplicated keys, as Aria will first write a row, then keys, and only then check for duplicates.

  • DEFINER is the default. Thus, by default, users who can access the database associated with the stored routine can also run the routine, and potentially perform operations they wouldn't normally have permissions for.
  • The creator of a routine is the account that ran the CREATE FUNCTION or CREATE PROCEDURE statement, regardless of whether a DEFINER is provided. The definer is by default the creator unless otherwise specified.

  • The server automatically changes the privileges in the mysql.proc table as required, but will not look out for manual changes.


    From a shell, unmount the snapshot with umount snapshot.

    Triggers work in the same way, except that they are always assumed to be deterministic for logging purposes, even if this is obviously not the case, such as when they use the UUID function.
  • Triggers can also update data. The slave uses the DEFINER attribute to determine which user is taken to have created the trigger.

  • Note that the above limitations do not apply to stored procedures or to events.

    In this example, text is added to a table called table1, which contains only three columns—the same number of values that we're inserting. The number of columns must match. If you don't want to insert data into all of the columns of a table, though, you could name the columns desired:

    Notice that the keyword INTO was added here. This is optional and has no effect on MariaDB. It's only a matter of grammatical preference. In this example we not only name the columns, but we list them in a different order. This is acceptable to MariaDB. Just be sure to list the values in the same order. If you're going to insert data into a table and want to specify all of the values except one (say the key column since it's an auto-incremented one), then you could just give a value of DEFAULT to keep from having to list the columns. Incidentally, you can give the column names even if you're naming all of them. It's just unnecessary unless you're going to reorder them as we did in this last example.

    When you have many rows of data to insert into the same table, it can be more efficient to insert all of the rows in one SQL statement. Multiple row insertions can be done like so:

    Notice that the keyword VALUES is used only once; each row is contained in its own set of parentheses, and the sets are separated by commas. We've added an intentional mistake to this example: we are attempting to insert three rows of data into table2, whose first column happens to be a UNIQUE key field. The third row entered here has the same identification number for the key column as the second row. This would normally result in an error, and none of the three rows would be inserted. However, since the statement has an IGNORE flag, duplicates are ignored and not inserted, but the other rows will still be inserted. So the first and second rows above are inserted, and the third is not.

    Priority

    An INSERT statement takes priority over read statements (i.e., SELECT statements). An INSERT will lock the table and force other clients to wait until it's finished. On a busy MariaDB server that has many simultaneous requests for data, this could cause users to experience delays when you run a script that performs a series of INSERT statements. If you don't want user requests to be put on hold and you can wait to insert the data, you could use the LOW_PRIORITY flag:

    The LOW_PRIORITY flag will put the INSERT statement in queue, waiting for all current and pending requests to be completed before it's performed. If new requests are made while a low priority statement is waiting, then they are put ahead of it in the queue. MariaDB does not begin to execute a low priority statement until there are no other requests waiting. Once the transaction begins, though, the table is locked and any other requests for data from the table that come in after it starts must wait until it's completed. Because it locks the table, low priority statements will prevent simultaneous insertions from other clients even if you're dealing with a MyISAM table. Incidentally, notice that the LOW_PRIORITY flag comes before the INTO.

    One potential inconvenience with an INSERT LOW_PRIORITY statement is that the client is tied up waiting for the statement to be completed successfully. So if you're inserting data into a busy server with a low priority setting using the mariadb client, your client could be locked up for minutes, maybe hours, depending on how busy your server is at the time. As an alternative either to making other clients with read requests wait or to having your client wait, you can use the DELAYED flag instead of the LOW_PRIORITY flag:

    MariaDB will take the request as a low priority one and put it on its list of tasks to perform when it has a break. However, it will immediately release the client so that the client can go on to enter other SQL statements or even exit. Another advantage of this method is that multiple INSERT DELAYED requests are batched together for block insertion when there is a gap, making the process potentially faster than INSERT LOW_PRIORITY. The flaw in this choice, however, is that the client is never told whether a delayed insertion succeeded. The client is informed of error messages when the statement is entered (the statement has to be valid before it is queued), but it's not told of problems that occur after the statement is accepted. This brings up another flaw: delayed insertions are stored in the server's memory. If the MariaDB daemon (mariadbd) dies or is manually killed, then the transactions are lost and the client is not notified of the failure. So DELAYED is not always a good alternative.

    Contingent Additions

    As an added twist to INSERT, you can combine it with a SELECT statement. Suppose that you have a table called employees which contains employee information for your company. Suppose further that you have a column to indicate whether an employee is on the company's softball team. However, you one day decide to create a separate database and table for the softball team's data that someone else will administer. To get the database ready for the new administrator, you have to copy some data for team members to the new table. Here's one way you can accomplish this task:

    In this SQL statement, the columns into which data is to be inserted are listed, followed by the complete SELECT statement with the appropriate WHERE clause to determine whether an employee is on the softball team. Since we're executing this statement from the new database, and since the table employees is in a separate database called company, we have to qualify it as you see here. By the way, INSERT...SELECT statements cannot be performed on the same table.

    Replacement Data

    When you're adding massive amounts of data to a table that has a key field, as mentioned earlier, you can use the IGNORE flag to prevent duplicates from being inserted, but still allow unique rows to be entered. However, there may be times when you actually want to replace the rows with the same key fields with the new ones. In such a situation, instead of using INSERT you can use a REPLACE statement:

    Notice that the syntax is the same as an INSERT statement. The flags all have the same effect, as well. Also, multiple rows may be inserted, but there's no need for the IGNORE flag since duplicates won't happen—the originals are just overwritten. Actually, when a row is replaced, it's first deleted completely and the new row is then inserted. Any columns without values in the new row are given the default values for the columns. None of the values of the old row are kept. Incidentally, REPLACE will also allow you to combine it with a SELECT statement as we saw with the INSERT statement earlier.

    Updating Data

    If you want to change the data contained in existing records, but only for certain columns, then you would need to use an UPDATE statement. The syntax for UPDATE is a little bit different from the syntax shown before for INSERT and REPLACE statements:

    In the SQL statement here, we are changing the value of the two columns named individually using the SET clause. Incidentally, the SET clause optionally can be used in INSERT and REPLACE statements, but it eliminates the multiple row option. In the statement above, we're also using a WHERE clause to determine which records are changed: only rows with an id that has a value less than 100 are updated. Notice that the LOW_PRIORITY flag can be used with this statement, too. The IGNORE flag can be used, as well.

    A useful feature of the UPDATE statement is that it allows the use of the current value of a column to update the same column. For instance, suppose you want to add one day to the value of a date column where the date is a Sunday. You could do the following:

    For rows where the day of the week is Sunday, the DATE_ADD() function will take the value of col_date before it's updated and add one day to it. MariaDB will then take this sum and set col_date to it.

    There are a couple more twists that you can now do with the UPDATE statement: if you want to update the rows in a specific order, you can add an ORDER BY clause. You can also limit the number of rows that are updated with a LIMIT clause. Below is an example of both of these clauses:

    The ordering can be descending as indicated here by the DESC flag, or ascending with either the ASC flag or by just leaving it out, as ascending is the default. The LIMIT clause, of course, limits the number of rows affected to ten here.

    If you want to refer to multiple tables in one UPDATE statement, you can do so like this:

    Here we see a join between the two tables named. In table3, the value of col1 is set to the value of the same column in table4 where the values of id from each match. We're not updating both tables here; we're just accessing both. We must specify the table name for each column to prevent an ambiguity error. Incidentally, ORDER BY and LIMIT clauses aren't allowed with multiple table updates.

    There's another combination that you can do with the INSERT statement that we didn't mention earlier. It involves the UPDATE statement. When inserting multiple rows of data, if you want to note which rows had potentially duplicate entries and which ones are new, you could add a column called status and change its value accordingly with a statement like this one:

    Because of the IGNORE flag, errors will not be generated and duplicates won't be inserted or replaced, but the rest are added. Because of the ON DUPLICATE KEY clause, the status column of the original row is set to old when there is a duplicate entry attempt. The rest are inserted and their status set to new.

    Conclusion

    As you can see from some of these SQL statements, MariaDB offers you quite a few ways to add and to change data. In addition to these methods, there are also some bulk methods of adding and changing data in a table. You could use the LOAD DATA INFILE statement and the mariadb-dump command-line utility. These methods are covered in another article on Importing Data into MariaDB.

    This page is licensed: CC BY-SA / Gnu FDL


    A more complex example, with input parameters, from an actual procedure used by banks:

    See CREATE PROCEDURE for full syntax details.

    Why use Stored Procedures?

    Security is a key reason. Banks commonly use stored procedures so that applications and users don't have direct access to the tables. Stored procedures are also useful in an environment where multiple languages and clients are all used to perform the same operations.

    Stored Procedure listings and definitions

    To find which stored procedures exist on the server, use SHOW PROCEDURE STATUS,

    or query the routines table in the INFORMATION_SCHEMA database directly:

    To find out what the stored procedure does, use SHOW CREATE PROCEDURE.

    Dropping and Updating a Stored Procedure

    To drop a stored procedure, use the DROP PROCEDURE statement.

    To change the characteristics of a stored procedure, use ALTER PROCEDURE. However, you cannot change the parameters or body of a stored procedure using this statement; to make such changes, you must drop and re-create the procedure using CREATE OR REPLACE PROCEDURE (which retains existing privileges) or DROP PROCEDURE followed by CREATE PROCEDURE.

    Permissions in Stored Procedures

    See the article Stored Routine Privileges.

    This page is licensed: CC BY-SA / Gnu FDL

    KEY takes an optional list of column_names, and the hashing function is given by the server.

    Just like HASH partitioning, KEY partitioning lets the server take care of placing each row and ensures an even distribution among the partitions. The largest difference is that KEY partitioning accepts only a list of column_names and cannot accept a partitioning_expression based on those columns, in contrast to HASH partitioning, which can.
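    The contrast can be sketched as follows (the table names and the DATE column d are hypothetical):

    ```sql
    -- HASH: an expression on the column is allowed.
    CREATE OR REPLACE TABLE th (d DATE)
      PARTITION BY HASH (TO_DAYS(d))
      PARTITIONS 4;

    -- KEY: only a list of column names; the server supplies the hash.
    CREATE OR REPLACE TABLE tk (d DATE)
      PARTITION BY KEY (d)
      PARTITIONS 4;
    ```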

    If no column_names are specified, the table's primary key is used if present, or a NOT NULL unique key if no primary key is present. If neither of these keys is present, not specifying any column_names will result in an error:

    Unlike other partitioning types, columns used for partitioning by KEY are not limited to integer or NULL values.

    KEY partitions do not support column index prefixes. Any columns in the partitioning key that make use of column prefixes are not used.

    Examples

    The unique key must be NOT NULL:

    KEY requires column_names if no primary key or NOT NULL unique key is present:

    Primary key columns with index prefixes are silently ignored, so the following two queries are equivalent:

    a(5) and c(5) are silently ignored in the former.

    If all columns use index prefixes, the statement fails with a slightly misleading error:

    This page is licensed: CC BY-SA / Gnu FDL

    mariadb-dump db_name > backup-file.sql
    mariadb db_name < backup-file.sql
    mariadb-hotcopy db_name [/path/to/new_directory]
    mariadb-hotcopy db_name_1 ... db_name_n /path/to/new_directory
    PARTITION BY RANGE COLUMNS (col1, col2, ...)
    (
    	PARTITION partition_name VALUES LESS THAN (value1, value2, ...),
    	[ PARTITION partition_name VALUES LESS THAN (value1, value2, ...), ... ]
    )
    PARTITION BY LIST COLUMNS (partitioning_expression)
    (
    	PARTITION partition_name VALUES IN (value1, value2, ...),
    	[ PARTITION partition_name VALUES IN (value1, value2, ...), ... ]
            [ PARTITION partition_name DEFAULT ]
    )
    CREATE OR REPLACE TABLE t1 (
      date1 DATE NOT NULL,
      date2 DATE NOT NULL
    )
      ENGINE = InnoDB
      PARTITION BY RANGE COLUMNS (date1,date2) (
        PARTITION p0 VALUES LESS THAN ('2013-01-01', '1994-12-01'),
        PARTITION p1 VALUES LESS THAN ('2014-01-01', '1995-12-01'),
        PARTITION p2 VALUES LESS THAN ('2015-01-01', '1996-12-01')
    );
    CREATE OR REPLACE TABLE t1 (
      num TINYINT(1) NOT NULL
    )
      ENGINE = InnoDB
      PARTITION BY LIST COLUMNS (num) (
        PARTITION p0 VALUES IN (0,1),
        PARTITION p1 VALUES IN (2,3),
        PARTITION p2 DEFAULT
      );
    DROP FUNCTION hello;
    Query OK, 0 rows affected (0.042 sec)
    
    DROP FUNCTION hello;
    ERROR 1305 (42000): FUNCTION test.hello does not exist
    
    DROP FUNCTION IF EXISTS hello;
    Query OK, 0 rows affected, 1 warning (0.000 sec)
    
    SHOW WARNINGS;
    +-------+------+------------------------------------+
    | Level | Code | Message                            |
    +-------+------+------------------------------------+
    | Note  | 1305 | FUNCTION test.hello does not exist |
    +-------+------+------------------------------------+
    ALTER PROCEDURE simpleproc SQL SECURITY INVOKER;
    DELIMITER //
     
    CREATE FUNCTION trust_me(x INT)
    RETURNS INT
    DETERMINISTIC
    READS SQL DATA
    BEGIN
       RETURN (x);
    END //
     
    DELIMITER ;
    DELIMITER //
    
    CREATE FUNCTION dont_trust_me()
    RETURNS INT
    BEGIN
       RETURN UUID_SHORT();
    END //
    
    DELIMITER ;
    INSERT table1
    VALUES('text1','text2','text3');
    INSERT INTO table1
    (col3, col1)
    VALUES('text3','text1');
    INSERT IGNORE
    INTO table2
    VALUES('id1','text','text'),
    ('id2','text','text'),
    ('id2','text','text');
    INSERT LOW_PRIORITY
    INTO table1
    VALUES('text1','text2','text3');
    INSERT DELAYED
    INTO table1
    VALUES('text1','text2','text3');
    INSERT INTO softball_team 
    (last, first, telephone) 
    SELECT name_last, name_first, tel_home
    FROM company.employees 
    WHERE softball='Y';
    REPLACE LOW_PRIORITY
    INTO table2 (id, col1, col2)
    VALUES('id1','text','text'),
    ('id2','text','text'),
    ('id3','text','text');
    UPDATE LOW_PRIORITY table3
    SET col1 = 'text-a', col2='text-b'
    WHERE id < 100;
    UPDATE table5
    SET col_date = DATE_ADD(col_date, INTERVAL 1 DAY)
    WHERE DAYOFWEEK(col_date) = 1;
    UPDATE LOW_PRIORITY table3
    SET col1='text-a', col2='text-b'
    WHERE id < 100
    ORDER BY col3 DESC
    LIMIT 10;
    UPDATE table3, table4
    SET table3.col1 = table4.col1
    WHERE table3.id = table4.id;
    INSERT IGNORE INTO table1 
    (id, col1, col2, status) 
    VALUES('1012','text','text','new'),
    ('1025','text','text','new'),
    ('1030','text','text','new')
    ON DUPLICATE KEY 
    UPDATE status = 'old';
    DELIMITER //
    
    CREATE PROCEDURE Reset_animal_count() 
     MODIFIES SQL DATA
     UPDATE animal_count SET animals = 0;
    //
    
    DELIMITER ;
    SELECT * FROM animal_count;
    +---------+
    | animals |
    +---------+
    |     101 |
    +---------+
    
    CALL Reset_animal_count();
    
    SELECT * FROM animal_count;
    +---------+
    | animals |
    +---------+
    |       0 |
    +---------+
    CREATE PROCEDURE
      Withdraw                             /* Routine name */
      (parameter_amount DECIMAL(6,2),     /* Parameter list */
      parameter_teller_id INTEGER,
      parameter_customer_id INTEGER)
      MODIFIES SQL DATA                   /* Data access clause */
      BEGIN                        /* Routine body */
        UPDATE Customers
            SET balance = balance - parameter_amount
            WHERE customer_id = parameter_customer_id;
        UPDATE Tellers
            SET cash_on_hand = cash_on_hand + parameter_amount
            WHERE teller_id = parameter_teller_id;
        INSERT INTO Transactions VALUES (
            parameter_customer_id,
            parameter_teller_id,
            parameter_amount);
      END;
    SHOW PROCEDURE STATUS\G
    *************************** 1. row ***************************
                      Db: test
                    Name: Reset_animal_count
                    Type: PROCEDURE
                 Definer: root@localhost
                Modified: 2013-06-03 08:55:03
                 Created: 2013-06-03 08:55:03
           Security_type: DEFINER
                 Comment: 
    character_set_client: utf8
    collation_connection: utf8_general_ci
      Database Collation: latin1_swedish_ci
    SELECT ROUTINE_NAME FROM INFORMATION_SCHEMA.ROUTINES 
      WHERE ROUTINE_TYPE='PROCEDURE';
    +--------------------+
    | ROUTINE_NAME       |
    +--------------------+
    | Reset_animal_count |
    +--------------------+
    SHOW CREATE PROCEDURE Reset_animal_count\G
    *************************** 1. row ***************************
               Procedure: Reset_animal_count
                sql_mode: 
        Create Procedure: CREATE DEFINER=`root`@`localhost` PROCEDURE `Reset_animal_count`()
        MODIFIES SQL DATA
    UPDATE animal_count SET animals = 0
    character_set_client: utf8
    collation_connection: utf8_general_ci
      Database Collation: latin1_swedish_ci
    DROP PROCEDURE Reset_animal_count;
    PARTITION BY KEY ([column_names])
    [PARTITIONS (number_of_partitions)]
     ERROR 1488 (HY000): Field in list of fields for partition function not found in table
    CREATE OR REPLACE TABLE t1 (v1 INT)
      PARTITION BY KEY (v1)
      PARTITIONS 2;
    CREATE OR REPLACE TABLE t1 (v1 INT, v2 INT)
      PARTITION BY KEY (v1,v2)
      PARTITIONS 2;
    CREATE OR REPLACE TABLE t1 (
        id INT NOT NULL PRIMARY KEY,
        name VARCHAR(5)
    )
    PARTITION BY KEY()
    PARTITIONS 2;
    CREATE OR REPLACE TABLE t1 (
        id INT NOT NULL UNIQUE KEY,
        name VARCHAR(5)
    )
    PARTITION BY KEY()
    PARTITIONS 2;
    CREATE OR REPLACE TABLE t1 (
        id INT NULL UNIQUE KEY,
        name VARCHAR(5)
    )
    PARTITION BY KEY()
    PARTITIONS 2;
    ERROR 1488 (HY000): Field in list of fields for partition function not found in table
    CREATE OR REPLACE TABLE t1 (
        id INT NULL UNIQUE KEY,
        name VARCHAR(5)
    )
    PARTITION BY KEY(name)
    PARTITIONS 2;
    CREATE OR REPLACE TABLE t1 (
        a VARCHAR(10),
        b VARCHAR(10),
        c VARCHAR(10),
        PRIMARY KEY (a(5), b, c(5))
    ) PARTITION BY KEY() PARTITIONS 2;
    
    CREATE OR REPLACE TABLE t1 (
        a VARCHAR(10),
        b VARCHAR(10),
        c VARCHAR(10),
        PRIMARY KEY (b)
    ) PARTITION BY KEY() PARTITIONS 2;
    CREATE OR REPLACE TABLE t1 (
        a VARCHAR(10),
        b VARCHAR(10),
        c VARCHAR(10),
        PRIMARY KEY (a(5), b(5), c(5))
    ) PARTITION BY KEY() PARTITIONS 2;
    ERROR 1503 (HY000): A PRIMARY KEY must include all columns in the table's partitioning function

  • To tell it which tables to back up, you can provide the --tables option.

  • To tell it which tables to exclude from the backup, you can provide the --tables-exclude option.

  • To tell it to check a file for specific tables to back up, you can provide the --tables-file option.


    mariadb-backup was previously called mariabackup.

    For a complete list of mariadb-backup options, see the mariadb-backup options reference.

    For a detailed description of mariadb-backup functionality, see the main mariadb-backup documentation.

    NoSQL: the new MONGO table type gives access to MongoDB collections as relational tables.

  • Read/Write access to external files of most commonly used formats.

  • Direct access to most external data sources via ODBC, JDBC and MySQL or MongoDB API.

  • Only the columns used by the query are retrieved from the external scan.

  • Push-down WHERE clauses when appropriate.

  • Support of special and virtual columns.

  • Parallel execution of multi-table queries (currently unavailable).

  • Supports partitioning by sub-files or by sub-tables (enabling table sharding).

  • Support of MRR for SELECT, UPDATE and DELETE.

  • Provides remote, block, dynamic and virtual indexing.

  • Can execute complex queries on remote servers.

  • Provides an API that allows writing additional FDW in C++.

  • The user name is either your Unix login name, or ODBC on Windows.

  • No password is sent.

  • The client will connect to the server with the default socket, but not any particular database on the server.

  • These defaults can be overridden by specifying a particular parameter to use. For example:
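    The example command was not preserved in this copy; reconstructed from the parameter descriptions that follow (host, user, password and database name are placeholders), it would look something like:

```shell
mariadb -h 166.78.144.191 -u username -ppassword database_name
```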

    In this case:

    • -h specifies a host. Instead of using localhost, the IP 166.78.144.191 is used.

    • -u specifies a user name, in this case username

    • -p specifies a password, password. Note that for passwords, unlike the other parameters, there cannot be a space between the option (-p) and the value (password). It is also not secure to use a password in this way, as other users on the system can see it as part of the command that has been run. If you include the -p option, but leave out the password, you are prompted for it, which is more secure.

    • The database name is provided as the first argument after all the options, in this case database_name.

    • It will connect with the default TCP/IP port, 3306.

    Connection Parameters

    host

    Connect to the MariaDB server on the given host. The default host is localhost. By default, MariaDB does not permit remote logins - see Configuring MariaDB for Remote Client Access.

    password

    The password of the MariaDB account. It is generally not secure to enter the password on the command line, as other users on the system can see it as part of the command that has been run. If you include the -p or --password option, but leave out the password, you are prompted for it, which is more secure.

    pipe

    On Windows systems that have been started with the --enable-named-pipe option, use this option to connect to the server using a named pipe.

    port

    The TCP/IP port number to use for the connection. The default is 3306.

    protocol

    Specifies the protocol to be used for the connection. It can be one of TCP, SOCKET, PIPE or MEMORY (case-insensitive). Usually you would not want to change this from the default. For example, on Unix, a Unix socket file (SOCKET) is the default protocol and usually results in the quickest connection.

    • TCP: A TCP/IP connection to a server (either local or remote). Available on all operating systems.

    • SOCKET: A Unix socket file connection, available to the local server on Unix systems only. If the socket is not specified with --socket, in a config file, or with the environment variable MYSQL_UNIX_PORT, then the default /tmp/mysql.sock is used.

    • PIPE: A named-pipe connection (either local or remote). Available on Windows only.

    • MEMORY: A shared-memory connection to the local server on Windows systems only.

    shared-memory-base-name

    Only available on Windows systems in which the server has been started with the --shared-memory option, this specifies the shared-memory name to use for connecting to a local server. The value is case-sensitive, and defaults to MARIADB.

    socket

    For connections to localhost, this specifies either the Unix socket file to use (default /tmp/mysql.sock), or, on Windows where the server has been started with the --enable-named-pipe option, the name (case-insensitive) of the named pipe to use (default MARIADB).

    TLS Options

    A brief listing is provided below. See Secure Connections Overview and TLS System Variables for more detail.

    ssl

    Enable TLS for connection (automatically enabled with other TLS flags). Disable with '--skip-ssl'

    ssl-ca

    CA file in PEM format (check OpenSSL docs, implies --ssl).

    ssl-capath

    CA directory (check OpenSSL docs, implies --ssl).

    ssl-cert

    X509 cert in PEM format (implies --ssl).

    ssl-cipher

    TLS cipher to use (implies --ssl).

    ssl-key

    X509 key in PEM format (implies --ssl).

    ssl-crl

    Certificate revocation list (implies --ssl).

    ssl-crlpath

    Certificate revocation list path (implies --ssl).

    ssl-verify-server-cert

    Verify server's "Common Name" in its cert against hostname used when connecting. This option is disabled by default.

    user

    The MariaDB user name to use when connecting to the server. The default is either your Unix login name, or ODBC on Windows. See the GRANT command for details on creating MariaDB user accounts.

    Option Files

    It's also possible to use option files (or configuration files) to set these options. Most clients read option files. Usually, starting a client with the --help option will display which files it looks for as well as which option groups it recognizes.

    See Also

    • A MariaDB Primer

    • mariadb client

    • Clients and Utilities

    • Configuring MariaDB for Remote Client Access

    • Starting the server with --skip-grant-tables allows you to start MariaDB without the privilege tables. This is useful if you lost your root password.

    This page is licensed: CC BY-SA / Gnu FDL

    Using Whitespace

    A query hard to read is a query hard to debug. White space is free. New lines and indentation make queries easy to read, particularly when constructing a query inside a scripting language, where variables are interspersed throughout the query.

    There is a syntax error in the following. How fast can you find it?
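    The original example query was lost in this copy; an illustrative reconstruction follows (table and column names are invented, and the missing ')' after team.teamId described below is intentional):

```sql
SELECT player.name, team.name FROM player JOIN team ON (player.teamId = team.teamId WHERE team.city = 'Boston' AND player.age >= 18 ORDER BY player.name;
```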

    Here's the same query, with correct use of whitespace. Can you find the error faster?
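    The reformatted version (the intentional syntax error is still present, but easier to spot):

```sql
SELECT player.name,
       team.name
FROM player
JOIN team ON (player.teamId = team.teamId
WHERE team.city = 'Boston'
  AND player.age >= 18
ORDER BY player.name;
```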

    Even if you don't know SQL, you might still have caught the missing ')' following team.teamId.

    The exact formatting style you use isn't so important. You might like commas in the select list to follow expressions, rather than precede them. You might indent with tabs or with spaces. Adherence to some particular form is not important. Legibility is the only goal.

    Table and Field Aliases

    Aliases allow you to rename tables and fields for use within a query. This can be handy when the original names are very long, and is required for self joins and certain subqueries. However, poorly chosen aliases can make a query harder to debug, rather than easier. Aliases should reflect the original table name, not an arbitrary string.

    Bad:
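    A sketch of the kind of query meant here (the publishers table and column names are illustrative); note the arbitrary one-letter aliases:

```sql
SELECT a.title, b.name_last, c.name
FROM books AS a
JOIN authors AS b ON (a.author_id = b.author_id)
JOIN publishers AS c ON (a.publisher_id = c.publisher_id)
WHERE c.country = 'USA';
```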

    As the list of joined tables and the WHERE clause grow, it becomes necessary to repeatedly look back to the top of the query to see to which table any given alias refers.

    Better:
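    The same illustrative query, with aliases built from the table initials:

```sql
SELECT bk.title, au.name_last, pub.name
FROM books AS bk
JOIN authors AS au ON (bk.author_id = au.author_id)
JOIN publishers AS pub ON (bk.publisher_id = pub.publisher_id)
WHERE pub.country = 'USA';
```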

    Each alias is just a little longer, but the table initials give enough clues that anyone familiar with the database only need see the full table name once, and can generally remember which table goes with which alias while reading the rest of the query.

    Placing JOIN conditions

    The manual warns against using the JOIN condition (that is, the ON clause) for restricting rows. Some queries, particularly those using implicit joins, take the opposite extreme - all join conditions are moved to the WHERE clause. In consequence, the table relationships are mixed with the business logic.

    Bad:
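    An illustrative implicit join (table and column names are invented); the join condition is buried among the business logic in the WHERE clause:

```sql
SELECT books.title, authors.name_last
FROM books, authors
WHERE authors.country = 'Czech Republic'
  AND books.author_id = authors.author_id
  AND books.year_pub > 1920;
```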

    Without digging through the WHERE clause, it is impossible to say what links the two tables.

    Better:
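    The same illustrative query with the join condition in the ON clause:

```sql
SELECT books.title, authors.name_last
FROM books
JOIN authors ON (books.author_id = authors.author_id)
WHERE authors.country = 'Czech Republic'
  AND books.year_pub > 1920;
```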

    The relation between the tables is immediately obvious. The WHERE clause is left to limit rows in the result set.

    Compliance with such a restriction negates the use of the comma operator to join tables. It is a small price to pay. Queries should be written using the explicit JOIN keyword anyway, and the two should never be mixed (unless you like rewriting all your queries every time a new version changes operator precedence).

    Finding Syntax Errors

    Syntax errors are among the easiest problems to solve. MariaDB provides an error message showing the exact point where the parser became confused. Check the query, including a few words before the phrase shown in the error message. Most syntax and parsing errors are obvious after a second look, but some are more elusive, especially when the error text seems empty, points to a valid keyword, or seems to error on syntax that appears exactly correct.

    Interpreting the Empty Error

    Most syntax errors are easy to interpret. The error generally details the exact source of the trouble. A careful look at the query, with the error message in mind, often reveals an obvious mistake, such as misspelled field names, a missing 'AND', or an extra closing parenthesis. Sometimes the error is a little less helpful. A frequent, less-than-helpful message:
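    The message typically looks like this (the server version string varies):

```
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that
corresponds to your MariaDB server version for the right syntax to use near '' at line 1
```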

    The empty ' ' can be disheartening. Clearly there is an error, but where? A good place to look is at the end of the query. The ' ' suggests that the parser reached the end of the statement while still expecting some syntax token to appear.

    Check for missing closers, such as ' and ):
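    For example (illustrative statements, each missing a closer):

```sql
SELECT * FROM books WHERE title = 'The Trial;

SELECT * FROM books WHERE author_id IN (1, 2, 3;
```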

    Look for incomplete clauses, often indicated by an exposed comma:
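    For example (an illustrative statement that trails off after a comma):

```sql
SELECT isbn, title FROM books WHERE author_id = 1 ORDER BY title,
```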

    Checking for keywords

    MariaDB allows table and field names and aliases that are also reserved words. To prevent ambiguity, such names must be enclosed in backticks (`):
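    For example (illustrative table with column names that are reserved words):

```sql
CREATE TABLE orders (id INT, `desc` VARCHAR(100), `date` DATE);
SELECT `desc`, `date` FROM orders;
```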

    If the syntax error is shown near one of your identifiers, check if it appears on the reserved word list.

    A text editor with color highlighting for SQL syntax helps to find these errors. When you enter a field name, and it shows up in the same color as the SELECT keyword, you know something is amiss. Some common culprits:

  • DESC is a common abbreviation for "description" fields. It means "descending" in a MariaDB ORDER BY clause.

    • DATE, TIME, and TIMESTAMP are all common field names. They are also field types.

    • ORDER appears in sales applications. MariaDB uses it to specify sorting for results.

    Some keywords are so common that MariaDB makes a special allowance to use them unquoted. My advice: don't. If it's a keyword, quote it.

    Version specific syntax

    As MariaDB adds new features, the syntax must change to support them. Most of the time, old syntax will work in newer versions of MariaDB. One notable exception is the change in precedence of the comma operator relative to the JOIN keyword in version 5.0. A query that used to work, such as
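    A sketch of the kind of query affected (table names are invented): since version 5.0 the JOIN binds more tightly than the comma, so b JOIN c is evaluated first and a.x is no longer visible in the ON clause:

```sql
SELECT * FROM a, b JOIN c ON (a.x = c.x);
```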

    will now fail.

    More common, however, is an attempt to use new syntax in an old version. Web hosting companies are notoriously slow to upgrade MariaDB, and you may find yourself using a version several years out of date. The result can be very frustrating when a query that executes flawlessly on your own workstation, running a recent installation, fails completely in your production environment.

    This query fails in any version of MySQL prior to 4.1, when subqueries were added to the server:
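    An illustrative subquery of that kind (table and column names are invented):

```sql
SELECT title FROM books
WHERE author_id = (SELECT author_id FROM authors WHERE name_last = 'Kafka');
```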

    This query fails in some early versions of MySQL, because the JOIN syntax did not originally allow an ON clause:
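    An illustrative JOIN with an ON clause (table and column names are invented):

```sql
SELECT books.title, authors.name_last
FROM books JOIN authors ON (books.author_id = authors.author_id);
```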

    Always check the installed version of MariaDB, and read the section of the manual relevant for that version. The manual usually indicates exactly when particular syntax became available for use.

    The initial version of this article was copied, with permission, from Basic_Debugging on 2012-10-05.

    This page is licensed: CC BY-SA / Gnu FDL

    Set bind_address to 0.0.0.0 to listen on all interfaces, or to the address of a specific public interface.

    From MariaDB 10.11, multiple comma-separated addresses can be given to bind_address to allow the server to listen on more than one specific interface while not listening on others.

    If bind-address is bound to 127.0.0.1 (localhost), one can't connect to the MariaDB server from other hosts or from the same host over TCP/IP on a different interface than the loopback (127.0.0.1). For example, this will not work (connecting with a hostname that points to a local IP of the host):

    (/my/maria-10.11) ./client/mariadb --host=myhost --protocol=tcp --port=3306 test
    ERROR 2002 (HY000): Can't connect to MySQL server on 'myhost' (115)
    (/my/maria-10.11) telnet myhost 3306
    Trying 192.168.0.11...
    telnet: connect to address 192.168.0.11: Connection refused

    Using 'localhost' works when binding with bind_address:

    (my/maria-10.11) ./client/mariadb --host=localhost --protocol=tcp --port=3306 test
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A

    In MariaDB versions before 10.11, multiple comma-separated addresses cannot be given to bind_address; use a single address.

    Finding the Defaults File

    To enable MariaDB to listen to remote connections, you need to edit your defaults file. See Configuring MariaDB with my.cnf for more detail.

    Common locations for defaults files are /etc/my.cnf or /etc/mysql/my.cnf system-wide, and the user-specific ~/.my.cnf.

    You can see which defaults files are read and in which order by executing:
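    The exact command was not preserved in this copy; one way to see the list (assuming mariadbd is on your PATH) is:

```shell
mariadbd --help --verbose 2>/dev/null | grep -A1 'Default options'
```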

    The last line shows which defaults files are read.

    Editing the Defaults File

    Once you have located the defaults file, use a text editor to open the file and try to find lines like this under the [mysqld] section:
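    The config excerpt was not preserved in this copy; based on the surrounding description, the lines to look for are of this form (the address value will vary):

```
[mysqld]
    ...
    skip-networking
    ...
    bind-address = <some ip-address>
```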

    The lines may not be in this particular order, but the order doesn't matter.

    If you are able to locate these lines, make sure they are both commented out (prefaced with hash (#) characters), so that they look like this:
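    That is, something like:

```
[mysqld]
    ...
    #skip-networking
    ...
    #bind-address = <some ip-address>
```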

    Again, the order of these lines doesn't matter.

    Alternatively, just add the following lines at the end of your .my.cnf (notice that the file name starts with a dot) file in your home directory or alternative last in your /etc/my.cnf file.
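    Based on the surrounding description (a later [mysqld] section that disables any earlier bind-address), the lines to add are:

```
[mysqld]
skip-bind-address
```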

    This works as one can have any number of [mysqld] sections.

    Save the file and restart the mariadbd daemon or service (see Starting and Stopping MariaDB).

    You can check the options mariadbd is using by executing:
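    For example (adjust the path to wherever mariadbd is installed):

```shell
./sql/mariadbd --print-defaults
```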

    It doesn't matter if you have the original --bind-address left in place, as the later --skip-bind-address will override it.

    Granting User Connections From Remote Hosts

    Now that your MariaDB server installation is set up to accept connections from remote hosts, we have to add a user that is allowed to connect from something other than 'localhost'. (Users in MariaDB are defined as 'user'@'host', so 'chadmaynard'@'localhost' and 'chadmaynard'@'1.1.1.1' (or 'chadmaynard'@'server.domain.local') are different users that can have different permissions and/or passwords.)

    To create a new user:

    • Log into the mariadb command line client (or your favorite graphical client if you wish):
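    For example:

```shell
mariadb -u root -p
```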

  • If you are interested in viewing any existing remote users, issue the following SQL statement on the mysql.user table:
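    A query of this form lists accounts that are not restricted to localhost:

```sql
SELECT User, Host FROM mysql.user WHERE Host <> 'localhost';
```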

    (If you have a fresh install, it is normal for no rows to be returned)

    Now you have some decisions to make. At the heart of every grant statement you have these things:

    • list of allowed privileges

    • what database/tables these privileges apply to

    • username

    • host this user can connect from

    • and optionally a password

    It is common for people to want to create a "root" user that can connect from anywhere, so as an example, we'll do just that, but to improve on it we'll create a root user that can connect from anywhere on my local area network (LAN), which has addresses in the subnet 192.168.100.0/24. This is an improvement because opening a MariaDB server up to the Internet and granting access to all hosts is bad practice.
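    A GRANT statement matching that description might look like this (choose your own strong password in place of the placeholder):

```sql
GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.100.%'
  IDENTIFIED BY 'my-new-password' WITH GRANT OPTION;
```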

    % is a wildcard.

    For more information about how to use GRANT, please see the GRANT page.

    At this point, we have accomplished our goal and we have a user 'root' that can connect from anywhere on the 192.168.100.0/24 LAN.

    Port 3306 is Configured in Firewall

    One more point to consider is whether the firewall is configured to allow incoming requests from remote clients:

    On RHEL and CentOS 7, it may be necessary to configure the firewall to allow TCP access to MariaDB from remote hosts. To do so, execute both of these commands:
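    With firewalld, the two commands are (the first opens the port immediately, the second makes the change persistent across reboots):

```shell
firewall-cmd --add-port=3306/tcp
firewall-cmd --permanent --add-port=3306/tcp
```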

    Caveats

    • If your system is running a software firewall (or behind a hardware firewall or NAT) you must allow connections destined to TCP port that MariaDB runs on (by default and almost always 3306).

    • To undo this change and not allow remote access anymore, simply remove the skip-bind-address line or uncomment the bind-address line in your defaults file. The end result should be that you should have in the output from ./sql/mariadbd --print-defaults the option --bind-address=127.0.0.1 and no --skip-bind-address.

    The initial version of this article was copied, with permission, from Remote_Clients_Cannot_Connect on 2012-10-30.

    This page is licensed: CC BY-SA / Gnu FDL

    Initialization Phase
    • Connect to mysqld instance, find out important variables (datadir, InnoDB pagesize, encryption keys, encryption plugin etc)

    • Scan the database directory, datadir, looking for InnoDB tablespaces, load the tablespaces (basically, it is an “open” in InnoDB sense)

    • If --lock-ddl-per-table is used:

      • Take MDL locks for the InnoDB tablespaces that we want to copy. This ensures that there are no ALTER, RENAME, TRUNCATE or DROP TABLE operations on any of the tables that we want to copy.

      • This is implemented with:

    • If lock-ddl-per-table is not done, then mariadb-backup would have to know all tables that were created or altered during the backup. See MDEV-16791.

    Redo Log Handling

    Start a dedicated thread in mariadb-backup to copy InnoDB redo log (ib_logfile*).

    • This is needed to record all changes done while the backup is running. (The redo log logically is a single circular file, split into innodb_log_files_in_group files.)

    • The log is also used to detect whether any TRUNCATE or online ALTER TABLE operations were used.

    • The assumption is that the copy thread is able to keep up with the server. It should always be able to keep up if the redo log is big enough.

    Copy-phase for InnoDB Tablespaces

    • Copy all selected tablespaces, file by file, in dedicated threads in mariadb-backup without involving the mysqld server.

    • This is a special, careful copy: it checks page-level consistency by verifying the checksum of each page.

    • The files are not point-in-time consistent as data may change during copy.

    • The idea is that InnoDB recovery would make it point-in-time consistent.

    Create a Consistent Backup Point

    • Execute FLUSH TABLES WITH READ LOCK. This is the default, but it may be omitted with the --no-lock parameter. The reason why FLUSH is needed is to ensure that all tables are in a consistent state at the exact same point in time, independent of storage engine.

    • If --lock-ddl-per-table is used and there is a user query waiting for an MDL lock, the user query is killed to resolve the deadlock. Note that these are only queries of type ALTER, DROP, TRUNCATE or RENAME TABLE.

    Last Copy Phase

    • Copy .frm, MyISAM, Aria and other storage engine files.

    • If MyRocks is used, create a RocksDB checkpoint via the set rocksdb_create_checkpoint=$rocksdb_data_dir/mariadb-backup_rocksdb_checkpoint command. The result is a directory with hardlinks to the MyRocks files. Copy the checkpoint directory to the backup (or create hardlinks if the backup directory is on the same partition as the data directory). Then remove the checkpoint directory.

    • Copy tables that were created while the backup was running, and rename files that were changed during the backup.

    • Copy the rest of InnoDB redo log, stop redo-log-copy thread.

    • Write some metadata info (binlog position).

    Release Locks

    • If FLUSH TABLES WITH READ LOCK was done:

      • execute: UNLOCK TABLES

    • If --lock-ddl-per-table was done:

      • execute COMMIT

    Handle Log Tables (TODO)

    • If log tables exist:

      • Take MDL lock for log tables

      • Copy the parts of the log tables that weren't copied before

      • Unlock log tables

    Notes

    • If FLUSH TABLES WITH READ LOCK is not used, only InnoDB tables are consistent (not the privilege tables in the mysql database or the binary log). The backup point depends on the content of the redo log within the backup itself.

    This page is licensed: CC BY-SA / Gnu FDL

    mariadb-backup was previously called mariabackup.


    Basic Queries

    This guide covers the fundamentals of creating database structures, inserting data, and retrieving information using the default MariaDB client.

    Connecting to MariaDB

    MariaDB is a database system, a database server. To interface with the MariaDB server, you can use a client program, or you can write a program or script with one of the popular programming languages (e.g., PHP) using an API (Application Programming Interface) to interface with the MariaDB server. For the purposes of this article, we will focus on using the default client that comes with MariaDB called mariadb. With this client, you can either enter queries from the command-line, or you can switch to a terminal, that is to say, monitor mode. To start, we'll use the latter.

    From the Linux command-line, you would enter the following to log in as the root user and to enter monitor mode:
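    The command was not preserved in this copy; matching the options described below, it would be:

```shell
mariadb -u root -p -h localhost
```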

    The -u option is for specifying the user name. You would replace root here if you want to use a different user name. This is the MariaDB user name, not the Linux user name. The password for the MariaDB user root will probably be different from the Linux user root. Incidentally, it's not a good security practice to use the root user unless you have a specific administrative task to perform for which only root has the needed privileges.

    The -p option above instructs the mariadb client to prompt you for the password. If the password for the root user hasn't been set yet, then the password is blank and you would just hit [Enter] when prompted. The -h option is for specifying the host name or the IP address of the server. This would be necessary if the client is running on a different machine than the server. If you've secure-shelled into the server machine, you probably won't need to use the host option. In fact, if you're logged into Linux as root, you won't need the user option—the -p is all you'll need. Once you've entered the line above along with the password when prompted, you are logged into MariaDB through the client. To exit, type quit or exit and press [Enter].

    Creating a Structure

    In order to be able to add and to manipulate data, you first have to create a database structure. Creating a database is simple. You would enter something like the following from within the mariadb client:
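    The two statements described below (create the database, then make it the default):

```sql
CREATE DATABASE bookstore;

USE bookstore;
```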

    This very minimal, first SQL statement will create a sub-directory called bookstore on the Linux filesystem in the directory which holds your MariaDB data files. It won't create any data, obviously. It'll just set up a place to add tables, which will in turn hold data. The second SQL statement above will set this new database as the default database. It will remain your default until you change it to a different one or until you log out of MariaDB.

    The next step is to begin creating tables. This is only a little more complicated. To create a simple table that will hold basic data on books, we could enter something like the following:
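    A sketch matching the six columns described below (the names year_pub and description are assumed where the text does not give them explicitly):

```sql
CREATE TABLE books (
  isbn CHAR(20) PRIMARY KEY,
  title VARCHAR(50),
  author_id INT,
  publisher_id INT,
  year_pub CHAR(4),
  description TEXT
);
```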

    This SQL statement creates the table books with six fields, or rather columns. The first column (isbn) is an identification number for each row; the name relates to the unique identifier used in the book publishing business. It has a fixed-width character type of 20 characters. It is the primary key column, on which the data is indexed. The column data type for the book title is a variable-width character column of fifty characters at most. The third and fourth columns are used for identification numbers for the author and the publisher. They are integer data types. The fifth column is used for the publication year of each book. The last column is for entering a description of each book. It's a TEXT data type, which means that it's a variable-width column that can hold up to 65535 bytes of data for each row. There are several other data types that may be used for columns, but this gives you a good sampling.

    To see how the table we created looks, enter the following SQL statement:
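```sql
DESCRIBE books;
```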

    To change the settings of a table, you can use the ALTER TABLE statement. I'll cover that statement in another article. To delete a table completely (including its data), you can use the DROP TABLE statement, followed by the table name. Be careful with this statement since it's not reversible.

    The next table we'll create for our examples is the authors table to hold author information. This table will save us from having to enter the author's name and other related data for each book written by each author. It also helps to ensure consistency of data: there's less chance of inadvertent spelling deviations.
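    The statement was not preserved in this copy; a sketch consistent with the columns used later in this article (the AUTO_INCREMENT key, the author's name and country):

```sql
CREATE TABLE authors (
  author_id INT AUTO_INCREMENT PRIMARY KEY,
  name_last VARCHAR(50),
  name_first VARCHAR(50),
  country VARCHAR(50)
);
```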

    We'll join this table to the books table as needed. For instance, we would use it when we want a list of books along with their corresponding authors' names. For a real bookstore's database, both of these tables would probably have more columns. There would also be several more tables. For the examples that follow, these two tables, as they are, will be enough.

    Minor Items

    Before moving on to the next step of adding data to the tables, let me point out a few minor items that I've omitted mentioning. SQL statements end with a semi-colon (or a \G). You can spread an SQL statement over multiple lines. However, it won't be passed to the server by the client until you terminate it with a semi-colon and hit [Enter]. To cancel an SQL statement once you've started typing it, enter \c and press [Enter].

    As a basic convention, reserved words are printed in all capital letters. This isn't necessary, though. MariaDB is case-insensitive with regards to reserved words. Database and table names, however, are case-sensitive on Linux. This is because they reference the related directories and files on the filesystem. Column names aren't case sensitive since they're not affected by the filesystem, per se. As another convention, we use lower-case letters for structural names (e.g., table names). It's a matter of preference for deciding on names.

    Entering Data

    The primary method for entering data into a table is to use the INSERT statement. As an example, let's enter some information about an author into the authors table. We'll do that like so:
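    A sketch of the statement described below (the column names match the authors table sketched earlier and are assumptions):

```sql
INSERT INTO authors (name_last, name_first, country)
VALUES ('Kafka', 'Franz', 'Czech Republic');
```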

    This will add the name and country of the author Franz Kafka to the authors table. We don't need to give a value for the author_id since that column was created with the AUTO_INCREMENT option. MariaDB will automatically assign an identification number. You can manually assign one, especially if you want to start the count at a higher number than 1 (e.g., 1000). Since we are not providing data for all of the columns in the table, we have to list the columns for which we are giving data and in the order that the data is given in the set following the VALUES keyword. This means that we could give the data in a different order.

    For an actual database, we would probably enter data for many authors. We'll assume that we've done that and move on to entering data for some books. Below is an entry for one of Kafka's books:
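    A sketch of the entry (the ISBN and year values are illustrative placeholders; note the columns are deliberately in a different order than before):

```sql
INSERT INTO books (title, author_id, isbn, year_pub)
VALUES ('The Castle', 1, '0805211063', '1998');
```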

    This adds a record for Kafka's book, The Castle. Notice that we mixed up the order of the columns, but it still works because both sets agree. We indicate that the author is Kafka by giving a value of 1 for the author_id. This is the value that was assigned by MariaDB when we entered the row for Kafka earlier. Let's enter a few more books for Kafka, but by a different method:
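    A sketch of the multi-row form described below (ISBN and year values are illustrative; the misspelled 'America' sets up the correction made later in this article):

```sql
INSERT INTO books (title, author_id, isbn, year_pub)
VALUES ('The Trial', 1, '0805210407', '1995'),
       ('The Metamorphosis', 1, '0553213695', '1995'),
       ('America', 1, '0805210644', '1995');
```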

    In this example, we've added three books in one statement. This allows us to give the list of column names once. We also give the keyword VALUES only once, followed by a separate set of values for each book, each contained in parentheses and separated by commas. This cuts down on typing and speeds up the process. Either method is fine and both have their advantages. To be able to continue with our examples, let's assume that data on thousands of books has been entered. With that behind us, let's look at how to retrieve data from tables.

    Retrieving Data

    The primary method of retrieving data from tables is to use a SELECT statement. There are many options available with the SELECT statement, but you can start simply. As an example, let's retrieve a list of book titles from the books table:
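```sql
SELECT title FROM books;
```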

    This will display all of the rows of books in the table. If the table has thousands of rows, MariaDB will display thousands. To limit the number of rows retrieved, we could add a LIMIT clause to the SELECT statement like so:
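```sql
SELECT title FROM books LIMIT 5;
```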

    This will limit the number of rows displayed to five. To be able to list the author's name for each book along with the title, you will have to join the books table with the authors table. To do this, we can use the JOIN clause like so:
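    A sketch of the join described below (name_last follows the authors table sketched earlier and is an assumption):

```sql
SELECT title, name_last
FROM books
JOIN authors USING (author_id);
```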

    Notice that the primary table from which we're drawing data is given in the FROM clause. The table to which we're joining is given in the JOIN clause along with the commonly named column (i.e., author_id) that we're using for the join.

To retrieve the titles of only books written by Kafka based on his name (not the author_id), we would use the WHERE clause with the SELECT statement. This would be entered like the following:
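A sketch, again assuming an authors column named name_last:

```sql
SELECT title AS 'Kafka Books'
FROM books
JOIN authors USING (author_id)
WHERE name_last = 'Kafka';
```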

This statement will list the titles of Kafka books stored in the database. Notice that I've added the AS parameter next to the column name title to change the column heading in the result set to Kafka Books. This is known as an alias. Looking at the results here, we can see that the title for one of Kafka's books is incorrect. His book Amerika is spelled above with a "c" in the table instead of a "k". This leads to the next section on changing data.

    Changing & Deleting Data

In order to change existing data, a common method is to use the UPDATE statement. When changing data, though, we need to be sure that we change the correct rows. In our example, there could be another book with the title America written by a different author. Since the key column isbn has only unique numbers and we know the ISBN number for the book that we want to change, we can use it to specify the row.
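A sketch with a hypothetical ISBN value:

```sql
UPDATE books
SET title = 'Amerika'
WHERE isbn = '0805210644';
```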

    This will change the value of the title column for the row specified. We could change the value of other columns for the same row by giving the column = value for each, separated by commas.

If we want to delete a row of data, we can use the DELETE statement. For instance, suppose that our fictitious bookstore has decided no longer to carry books by John Grisham. By first running a SELECT statement, we determine the identification number for the author to be 2034. Using this author identification number, we could enter the following:
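The statement might look like this:

```sql
DELETE FROM books
WHERE author_id = 2034;
```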

    This statement will delete all rows from the table books for the author_id given. To do a clean job of it, we'll have to do the same for the authors table. We would just replace the table name in the statement above; everything else would be the same.

    Conclusion

This is a very basic primer for using MariaDB. Hopefully, it gives you the idea of how to get started with MariaDB. Each of the SQL statements mentioned here has several more options and clauses. We will cover these statements and others in greater detail in other articles. For now, though, you can learn more about them by experimenting and by further reading of the online documentation.

    This page is licensed: CC BY-SA / Gnu FDL

    Incremental Backup and Restore with mariadb-backup

    This guide explains how to create and apply incremental backups with mariadb-backup, saving storage space and reducing backup time.

    mariadb-backup was previously called mariabackup.

    When using mariadb-backup, you have the option of performing a full or incremental backup. Full backups create a complete copy in an empty directory while incremental backups update a previous backup with new data. This page documents incremental backups.

InnoDB pages contain log sequence numbers, or LSNs. Whenever you modify a row in any InnoDB table in the database, the storage engine increments this number. When performing an incremental backup, mariadb-backup checks the most recent LSN for the backup against the LSNs contained in the database. It then updates any of the backup files that have fallen behind.

For a complete list of mariadb-backup options, see the mariadb-backup options reference.

For a detailed description of mariadb-backup functionality, see the mariadb-backup overview.

    Backing up the Database Server

    In order to take an incremental backup, you first need to take a full backup. In order to back up the database, you need to run mariadb-backup with the --backup option to tell it to perform a backup and with the --target-dir option to tell it where to place the backup files. When taking a full backup, the target directory must be empty or it must not exist.

    To take a backup, run the following command:
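A typical invocation; the user name and password are placeholders:

```shell
mariadb-backup --backup \
   --target-dir=/var/mariadb/backup/ \
   --user=mariadb-backup --password=mypassword
```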

    This backs up all databases into the target directory /var/mariadb/backup. If you look in that directory at the xtrabackup_checkpoints file, you can see the LSN data provided by InnoDB.

    For example:
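The file contents look something like this; the LSN values are illustrative:

```text
backup_type = full-backuped
from_lsn = 0
to_lsn = 1635102
last_lsn = 1635102
```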

    Backing up the Incremental Changes

    Once you have created a full backup on your system, you can also back up the incremental changes as often as you would like.

    In order to perform an incremental backup, you need to run mariadb-backup with the --backup option to tell it to perform a backup and with the --target-dir option to tell it where to place the incremental changes. The target directory must be empty. You also need to run it with the --incremental-basedir option to tell it the path to the full backup taken above. For example:
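A sketch of such an invocation; the credentials are placeholders:

```shell
mariadb-backup --backup \
   --target-dir=/var/mariadb/inc1/ \
   --incremental-basedir=/var/mariadb/backup/ \
   --user=mariadb-backup --password=mypassword
```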

    This command creates a series of delta files that store the incremental changes in /var/mariadb/inc1. You can find a similar xtrabackup_checkpoints file in this directory, with the updated LSN values.

    For example:

    To perform additional incremental backups, you can then use the target directory of the previous incremental backup as the incremental base directory of the next incremental backup. For example:
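For instance, using /var/mariadb/inc2 as an illustrative target directory for the second increment:

```shell
mariadb-backup --backup \
   --target-dir=/var/mariadb/inc2/ \
   --incremental-basedir=/var/mariadb/inc1/ \
   --user=mariadb-backup --password=mypassword
```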

    Using the Backup History Table

    Alternatively, you can use the backup history table to manage your backup chain. This allows you to reference the previous backup by a logical name instead of a directory path.

    1. Create the Base Backup: Take a full backup using the --history option.

    2. Create the Incremental Backup: Use --incremental-history-name to specify the base backup's name. It is recommended to use --history again to record this new incremental backup.
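The two steps above can be sketched as follows; the backup name sunday_full and the credentials are illustrative:

```shell
# Step 1: full backup recorded in the history table under a logical name
mariadb-backup --backup --history=sunday_full \
   --target-dir=/var/mariadb/backup/ \
   --user=mariadb-backup --password=mypassword

# Step 2: incremental backup referencing the base backup by that name
mariadb-backup --backup --history \
   --incremental-history-name=sunday_full \
   --target-dir=/var/mariadb/inc1/ \
   --user=mariadb-backup --password=mypassword
```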

    Privileges

    Requires SELECT on the history table to find the base backup, and INSERT to record the new one.

    You can also use --incremental-history-uuid if you prefer to reference the unique ID generated by mariadb-backup.

    Combining with --stream output

When using --stream, for instance to compress or encrypt the backup with external tools, the xtrabackup_checkpoints file (which records where the next incremental backup should continue from) becomes part of the compressed or encrypted backup file, and so is not directly accessible by default.

A directory containing an extra copy of the file can, however, be created using the --extra-lsndir=... option, and this directory can then be passed to the next incremental backup via --incremental-basedir=..., for example:
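A sketch of this workflow; the directory and file names are illustrative, and gzip stands in for any external compression tool:

```shell
# Full backup streamed to a compressed file, with a readable copy of
# xtrabackup_checkpoints kept in /var/mariadb/lsn_full
mariadb-backup --backup --stream=xbstream \
   --extra-lsndir=/var/mariadb/lsn_full \
   --user=mariadb-backup --password=mypassword | gzip > full.xb.gz

# The next incremental backup picks up the LSN information from there
mariadb-backup --backup --stream=xbstream \
   --incremental-basedir=/var/mariadb/lsn_full \
   --extra-lsndir=/var/mariadb/lsn_inc1 \
   --user=mariadb-backup --password=mypassword | gzip > inc1.xb.gz
```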

    Preparing the Backup

    Following the above steps, you have three backups in /var/mariadb: The first is a full backup, the others are increments on this first backup. In order to restore a backup to the database, you first need to apply the incremental backups to the base full backup. This is done using the --prepare command option.

    Perform the following process:

    First, prepare the base backup:
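For example:

```shell
mariadb-backup --prepare \
   --target-dir=/var/mariadb/backup
```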

    Running this command brings the base full backup, that is, /var/mariadb/backup, into sync with the changes contained in the InnoDB redo log collected while the backup was taken.

    Then, apply the incremental changes to the base full backup:
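For example, applying the first increment with the --incremental-dir option:

```shell
mariadb-backup --prepare \
   --target-dir=/var/mariadb/backup \
   --incremental-dir=/var/mariadb/inc1
```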

    Running this command brings the base full backup, that is, /var/mariadb/backup, into sync with the changes contained in the first incremental backup.

    For each remaining incremental backup, repeat the last step to bring the base full backup into sync with the changes contained in that incremental backup.

    Restoring the Backup

    Once you've applied all incremental backups to the base, you can restore the backup using either the --copy-back or the --move-back options. The --copy-back option allows you to keep the original backup files. The --move-back option actually moves the backup files to the datadir, so the original backup files are lost.

• First, stop the MariaDB Server process.

    • Then, ensure that the datadir is empty.

    • Then, run mariadb-backup with one of the options mentioned above:
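For example, to restore while keeping the original backup files:

```shell
mariadb-backup --copy-back \
   --target-dir=/var/mariadb/backup
```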

    • Then, you may need to fix the file permissions.

    When mariadb-backup restores a database, it preserves the file and directory privileges of the backup. However, it writes the files to disk as the user and group restoring the database. As such, after restoring a backup, you may need to adjust the owner of the data directory to match the user and group for the MariaDB Server, typically mysql for both. For example, to recursively change ownership of the files to the mysql user and group, you could execute:
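Assuming the default data directory location /var/lib/mysql:

```shell
chown -R mysql:mysql /var/lib/mysql/
```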

• Finally, start the MariaDB Server process.

    This page is licensed: CC BY-SA / Gnu FDL

    Backup and Restore via dbForge Studio

    Learn how to use dbForge Studio, a GUI tool, to perform backup and restore operations for MariaDB databases visually.

    dbForge Studio is a proprietary third-party tool, not included with MariaDB Server. Content contributed by devart.

In the modern world, the importance of data is non-negotiable, and keeping data integrity and consistency is the top priority. Data stored in databases is vulnerable to system crashes, hardware problems, security breaches, and other failures causing data loss or corruption. To prevent database damage, it is important to back up data regularly and to implement data restore policies. MariaDB, one of the most popular database management systems, provides several methods to configure routines for backing up and recovering data. This guide illustrates both processes performed with the help of dbForge Studio for MySQL, which is also a fully functional GUI client for MariaDB with everything you need to accomplish database-related tasks on MariaDB.

    Create the backup on MariaDB

dbForge Studio for MySQL and MariaDB has a separate module dedicated to data backup and recovery jobs. Let us first look at how to set up the tool to create a MariaDB backup. Launch the Studio and go to Database > Backup and Restore > Backup Database. The Database Backup Wizard with several pages will appear. On the General page, specify the database in question and how to connect to it, then choose where to save the created backup file, and specify its name. There are additional optional settings: you can choose to delete old files automatically, zip the output backup file, and so on. When done, click Next.

    On the Backup content page, select the objects to back up. Click Next.

    The Options page. Here you can specify the details of the data backing up process. Plenty of available options allow you to configure this task precisely to meet the specific requirements. When done, click Next.

    The Errors handling page. Here you configure how the Studio should handle the errors that might occur during the backing up process. Also, you can set the Studio to write the information about the errors it encountered into the log file.

    You can save the project settings to apply them in the future. For this, in the left bottom corner of the Wizard, select one of the saving options: Save Project or Save Command Line. The latter allows saving settings as a backup script which you can execute from the command line at any time later.

    The configuration process is complete. Click Backup to launch the data backing up.

    Note: It is not obligatory to go through all the pages of the Wizard. The Backup button is available no matter on which page you are. Thus, you can launch the process of backing the data up whenever you have set everything you needed.

    After you have clicked Backup, dbForge Studio for MySQL starts to create a MariaDB backup.

    When this is done, you will see the confirmation message. Click Finish.

Backup and restore policies suggest creating regular backups on a daily, weekly, monthly, quarterly, and yearly basis. Besides, to minimize the consequences of possible data loss, it is highly recommended to make a backup before making any changes to a database, such as upgrading, modifying data, or redesigning the structure. Simply speaking, you always need a fresh backup to restore the most up-to-date database version. To ensure regular backups on schedule, you can use a batch file created with the help of the Studio and Windows Task Scheduler, where you need to create and schedule the backup task.

    Restore the backup file on MariaDB

    This is an even faster task, done in half as many steps.

    The process of data recovery from the backup file is simple. It only takes several clicks: Launch dbForge Studio for MySQL and go to Database > Backup and Restore > Restore Database. The Database Restore Wizard will appear. Specify the database name, its connection parameters, and the path to the backup file you want to restore. Then click Restore, and the process will start immediately.

    When the process is complete, click Finish.

More information about this essential feature is available in the dbForge Studio documentation. It explores the routines performed on MySQL, but they fully apply to MariaDB backups. You can use the same IDE and the same workflow.

To test-drive this and other features of the Studio (the IDE includes all the tools necessary for the development, management, and administration of databases on MariaDB), download the free trial. dbForge Studio for MySQL and MariaDB boasts truly advanced functionality that will help your teams deliver more value.

    This page is licensed: CC BY-SA / Gnu FDL

    RANGE Partitioning Type

    The RANGE partitioning type assigns rows to partitions based on whether column values fall within contiguous, non-overlapping ranges.

    The RANGE partitioning type is used to assign each partition a range of values generated by the partitioning expression. Ranges must be ordered, contiguous and non-overlapping. The minimum value is always included in the first range. The highest value may or may not be included in the last range.

A variant of this partitioning method, RANGE COLUMNS, allows us to use multiple columns and more data types.

    Syntax

The last part of a CREATE TABLE statement can be the definition of the new table's partitions. In the case of RANGE partitioning, the syntax is the following:
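The partition definition clause can be sketched as:

```sql
CREATE TABLE table_name ( ... )
PARTITION BY RANGE (partitioning_expression)
(
   PARTITION partition_name VALUES LESS THAN (value),
   PARTITION partition_name VALUES LESS THAN (value),
   ...
);
```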

• PARTITION BY RANGE indicates that the partitioning type is RANGE.

    • partitioning_expression is an SQL expression that returns a value from each row. In the simplest cases, it is a column name. This value is used to determine which partition should contain a row.

    • partition_name is the name of a partition.

• value indicates the upper bound for that partition. The values must be ascending. For the first partition, the lower limit is NULL.

As a catchall, MAXVALUE can be specified as a value for the last partition. Note, however, that in order to append a new partition after MAXVALUE, it is not possible to use ALTER TABLE ... ADD PARTITION; instead, ALTER TABLE ... REORGANIZE PARTITION must be used.

    Use Cases

A typical use case is when we want to partition a table whose rows refer to a moment or period in time; for example, commercial transactions, blog posts, or events of some kind. We can partition the table by year, to keep all recent data in one partition and distribute historical data in big partitions that are stored on slower disks. Or, if our queries always read rows which refer to the same month or week, we can partition the table by month or week (in this case, historical data and recent data are stored together).

AUTO_INCREMENT values also represent a chronological order. So, these values can be used to store old data in separate partitions. However, partitioning by id is not the best choice if we usually query a table by date.

    Examples

    Partitioning a log table by year:
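A sketch with an illustrative table definition; note that the partitioning column must be part of every unique key, which is why timestamp is included in the primary key:

```sql
CREATE TABLE log (
   id INT UNSIGNED NOT NULL AUTO_INCREMENT,
   timestamp DATETIME NOT NULL,
   msg TEXT,
   PRIMARY KEY (id, timestamp)
)
PARTITION BY RANGE (YEAR(timestamp))
(
   PARTITION p2021 VALUES LESS THAN (2022),
   PARTITION p2022 VALUES LESS THAN (2023),
   PARTITION p2023 VALUES LESS THAN (2024)
);
```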

    Partitioning the table by both year and month:

In the last example, a function of the date column is used to accomplish the purpose. Also, the first two partitions cover longer periods of time (probably because the logged activities were less intensive).

    In both cases, when our tables become huge and we don't need to store all historical data any more, we can drop the oldest partitions in this way:
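For example, assuming a log table with a partition named p2021 holding the oldest rows (names illustrative):

```sql
ALTER TABLE log DROP PARTITION p2021;
```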

    We will still be able to drop a partition that does not contain the oldest data, but all rows stored in it will disappear.

    Example of an error when inserting outside a defined partition range:
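A sketch with illustrative table and values:

```sql
-- Fails with an error ("Table has no partition for value ...")
-- if no partition's upper bound covers YEAR('2031-01-01')
INSERT INTO log (timestamp, msg) VALUES ('2031-01-01', 'out of range');
```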

    To avoid the error, use the IGNORE keyword:
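The row that does not fit into any partition is then silently discarded:

```sql
INSERT IGNORE INTO log (timestamp, msg) VALUES ('2031-01-01', 'out of range');
```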

    An alternative definition with MAXVALUE as a catchall:
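A sketch of such a partition clause:

```sql
PARTITION BY RANGE (YEAR(timestamp))
(
   PARTITION p2021 VALUES LESS THAN (2022),
   PARTITION p2022 VALUES LESS THAN (2023),
   PARTITION pmax  VALUES LESS THAN MAXVALUE
);
```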

    This page is licensed: CC BY-SA / Gnu FDL

    Full Backup and Restore with mariadb-backup

    Learn how to perform and restore full physical backups of MariaDB databases using the mariadb-backup tool, ensuring consistent data recovery.

    mariadb-backup was previously called mariabackup.

    When using mariadb-backup, you have the option of performing a full or an incremental backup. Full backups create a complete backup of the database server in an empty directory while incremental backups update a previous backup with whatever changes to the data have occurred since the backup. This page documents how to perform full backups.

For a complete list of mariadb-backup options, see the mariadb-backup options reference.

For a detailed description of mariadb-backup functionality, see the mariadb-backup overview.

    Backing up the Database Server

    In order to back up the database, you need to run mariadb-backup with the --backup option to tell it to perform a backup and with the --target-dir option to tell it where to place the backup files. When taking a full backup, the target directory must be empty or it must not exist.

    To take a backup, run the following command:
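A typical invocation; the user name and password are placeholders:

```shell
mariadb-backup --backup \
   --target-dir=/var/mariadb/backup/ \
   --user=mariadb-backup --password=mypassword
```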

    The time the backup takes depends on the size of the databases or tables you're backing up. You can cancel the backup if you need to, as the backup process does not modify the database.

mariadb-backup writes the backup files to the target directory. If the target directory doesn't exist, it creates it. If the target directory exists and contains files, it raises an error and aborts.

    Here is an example backup directory:

    Using the Backup History Feature

    You can optionally use the --history option to record metadata about your full backup in the database. This creates a centralized log and allows future incremental backups to reference this full backup by name instead of by directory path.

    • Privileges: The backup user requires INSERT, CREATE, and ALTER privileges on the history table (mysql.mariadb_backup_history in MariaDB 10.11+, or PERCONA_SCHEMA.xtrabackup_history in older versions).

    • Failure Case: If the user lacks privileges, the backup will complete the file copy process but will fail at the final step with an INSERT command denied error.

    Preparing the Backup for Restoration

The data files that mariadb-backup creates in the target directory are not point-in-time consistent, given that the data files are copied at different times during the backup operation. If you try to restore from these files, InnoDB notices the inconsistencies and crashes to protect you from corruption.

    Before you can restore from a backup, you first need to prepare it to make the data files consistent. You can do so with the --prepare option.

    Backup Preparation Steps

1. Run mariadb-backup --backup. You must use a version of mariadb-backup that is compatible with the server version you are planning to upgrade from. For instance, when upgrading from MariaDB 10.4 to 10.5, you must use the 10.4 version of mariadb-backup. Another example: when upgrading from MariaDB 10.6 to 10.11, you must use the 10.6 version of mariadb-backup.

    2. Run mariadb-backup --prepare, again using a compatible version of mariadb-backup, as described in the previous step.

    Restoring the Backup

    Once the backup is complete and you have prepared the backup for restoration (previous step), you can restore the backup using either the --copy-back or the --move-back options. The --copy-back option allows you to keep the original backup files. The --move-back option actually moves the backup files to the datadir, so the original backup files are lost.

    • First, stop the MariaDB Server process.

    • Then, ensure that datadir is empty.

    • Then, run mariadb-backup with one of the options mentioned above:
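For example, to restore while keeping the original backup files:

```shell
mariadb-backup --copy-back \
   --target-dir=/var/mariadb/backup
```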

    • Then, you may need to fix the file permissions.

    When mariadb-backup restores a database, it preserves the file and directory privileges of the backup. However, it writes the files to disk as the user and group restoring the database. As such, after restoring a backup, you may need to adjust the owner of the data directory to match the user and group for the MariaDB Server, typically mysql for both. For example, to recursively change ownership of the files to the mysql user and group, you could execute:
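Assuming the default data directory location /var/lib/mysql:

```shell
chown -R mysql:mysql /var/lib/mysql/
```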

    • Finally, start the MariaDB Server process.

    Restoring with Other Tools

    Once a full backup is prepared, it is a fully functional MariaDB data directory. Therefore, as long as the MariaDB Server process is stopped on the target server, you can technically restore the backup using any file copying tool, such as cp or rsync. For example, you could also execute the following to restore the backup:
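For example, assuming the default data directory (remember to fix file ownership afterwards):

```shell
cp -r /var/mariadb/backup/* /var/lib/mysql/
chown -R mysql:mysql /var/lib/mysql/
```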

    This page is licensed: CC BY-SA / Gnu FDL

    ARCHIVE

    The Archive storage engine is optimized for high-speed insertion and compression of large amounts of data, suitable for logging and auditing.

The ARCHIVE storage engine uses gzip to compress rows. It is mainly used for storing large amounts of data, without indexes, with only a very small footprint.

    A table using the ARCHIVE storage engine is stored in two files on disk. There's a table definition file with an extension of .frm, and a data file with the extension .ARZ. At times during optimization, a .ARN file will appear.

    New rows are inserted into a compression buffer and are flushed to disk when needed. SELECTs cause a flush. Sometimes, rows created by multi-row inserts are not visible until the statement is complete.

    ARCHIVE allows a maximum of one key. The key must be on an AUTO_INCREMENT column, and can be a PRIMARY KEY or a non-unique key. However, it has a limitation: it is not possible to insert a value which is lower than the next AUTO_INCREMENT value.

    Installing the Plugin

    Although the plugin's shared library is distributed with MariaDB by default, the plugin is not actually installed by MariaDB by default. There are two methods that can be used to install the plugin with MariaDB.

The first method can be used to install the plugin without restarting the server. You can install the plugin dynamically by executing INSTALL SONAME or INSTALL PLUGIN:
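For example:

```sql
INSTALL SONAME 'ha_archive';
```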

The second method can be used to tell the server to load the plugin when it starts up. The plugin can be installed this way by providing the --plugin-load or the --plugin-load-add options. This can be specified as a command-line argument to mariadbd, or it can be specified in a relevant server option group in an option file:
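For example, in an option file:

```ini
[mariadb]
plugin_load_add = ha_archive
```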

    Uninstalling the Plugin

You can uninstall the plugin dynamically by executing UNINSTALL SONAME or UNINSTALL PLUGIN:
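For example:

```sql
UNINSTALL SONAME 'ha_archive';
```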

If you installed the plugin by providing the --plugin-load or the --plugin-load-add options in a relevant server option group in an option file, then those options should be removed to prevent the plugin from being loaded the next time the server is restarted.

    Characteristics

• Supports INSERT and SELECT, but not DELETE, UPDATE, or REPLACE.

    • Data is compressed with zlib as it is inserted, making it very small.

• Data is slow to select, as it needs to be uncompressed, and there is no cache.

    • Supports AUTO_INCREMENT (since MariaDB/MySQL 5.1.6), which can be a unique or a non-unique index.

    DBMS_OUTPUT

    The DBMS_OUTPUT plugin provides Oracle-compatible output buffering functions (like PUT_LINE), allowing stored procedures to send messages to the client.

    This feature is available from MariaDB Enterprise Server 11.8.

    Overview

    Oracle documentation describing DBMS_OUTPUT can be found here: https://docs.oracle.com/en/database/oracle/oracle-database/21/arpls/DBMS_OUTPUT.html

    The main idea of DBMS_OUTPUT is:

• Messages submitted by DBMS_OUTPUT.PUT_LINE() are not sent to the client until the sending subprogram (or trigger) completes. There is no way to flush output during the execution of a procedure.

• Therefore, lines are collected into a server-side buffer, which, at the end of the current user statement, can be fetched to the client side using another SQL statement. Then, they can be read using the regular MariaDB Connector/C API. No changes in the client protocol are needed.

• Oracle's SQL*Plus uses the procedure DBMS_OUTPUT.GET_LINES() to fetch the output to the client side as an array of strings.

    Package Routines

    Package Overview

    MariaDB implements all routines supported by Oracle, except GET_LINES():

    • Procedure ENABLE() - enable the routines.

    • Procedure DISABLE() - disable the routines. If the package is disabled, all calls to subprograms, such as PUT() and PUT_LINE(), are ignored (or exit immediately without doing anything).

    • Procedure PUT_LINE()

    The package starts in disabled mode, so an explicit enabling is needed:

    Details

If a call for GET_LINE or GET_LINES did not retrieve all lines, then a subsequent call for PUT, PUT_LINE, or NEW_LINE discards the remaining lines (to avoid confusion with the next message). This script demonstrates the principle:


    Data Type for the Buffer

    Oracle uses this data type as a storage for the buffer:

    Like Oracle, MariaDB uses an associative array as a storage for the buffer.

    Data Type Used for GET_LINES()

    This functionality is not implemented.

    In Oracle, the function GET_LINES() returns an array of strings of this data type:

    MariaDB does not have array data types in the C and C++ connectors, so they can't take advantage of GET_LINES() in a client program.

    Fetching all Lines in a PL/SQL Program

    Fetching all lines in a PL/SQL program is implemented using a loop of sys.DBMS_OUTPUT.GET_LINE() calls:

    Fetching all Lines on the Client Side

    Fetching all lines on the client side (for instance, in a C program using Connector/C) is done by using a loop of sys.DBMS_OUTPUT.GET_LINE() queries.

    Limits

    Oracle has the following limits:

    • The maximum individual line length (sent to DBMS_OUTPUT) is 32767 bytes.

    • The default buffer size is 20000 bytes. The minimum size is 2000 bytes. The maximum is unlimited.

    MariaDB also implements some limits, either using the total size of all rows or using the row count.

    Installation

    Like other bootstrap scripts, the script creating DBMS_OUTPUT:

    • Is put into a new separate /scripts/dbms_ouput.sql file in the source directory;

    • Is installed into /share/dbms_ouput.sql of the installation directory.


    CONNECT

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    Note: You can download a PDF version of the CONNECT documentation (1.7.0003):


    The CONNECT storage engine enables MariaDB to access external local or remote data (MED). This is done by defining tables based on different data types, in particular files in various formats, data extracted from other DBMS or products (such as Excel or MongoDB) via ODBC or JDBC, or data retrieved from the environment (for example DIR, WMI, and MAC tables)

This storage engine supports table partitioning, MariaDB virtual columns and permits defining special columns such as ROWID, FILEID, and SERVID.

    No precise definition of maturity exists. Because CONNECT handles many table types, each type has a different maturity depending on whether it is old and well-tested, less well-tested or newly implemented. This is indicated for all data types.

    Stored Function Overview

    A Stored Function is a set of SQL statements that can be called by name, accepts parameters, and returns a single value, enhancing SQL with custom logic.

    A Stored Function is a defined function that is called from within an SQL statement like a regular function and returns a single value.

    Creating Stored Functions

    Here's a skeleton example to see a stored function in action:
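The example described below can be reconstructed as:

```sql
DELIMITER //

CREATE FUNCTION FortyTwo() RETURNS TINYINT DETERMINISTIC
BEGIN
   DECLARE x TINYINT;
   SET x = 42;
   RETURN x;
END
//

DELIMITER ;
```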

First, the delimiter is changed, since the function definition will contain the regular semicolon delimiter. See Delimiters in the mariadb client for more. Then the function is named FortyTwo and defined to return a tinyint. The DETERMINISTIC keyword is not necessary in all cases (although if binary logging is on, leaving it out will throw an error), and is there to help the query optimizer choose a query plan. A deterministic function is one that, given the same arguments, will always return the same result.

Next, the function body is placed between BEGIN and END statements. It declares a tinyint, X, which is simply set to 42, and this is the result returned.

    Of course, a function that doesn't take any arguments is of little use. Here's a more complex example:
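A sketch matching the description that follows; the function name and tax rate are illustrative:

```sql
DELIMITER //

CREATE FUNCTION add_tax(price DECIMAL(10,2)) RETURNS INT DETERMINISTIC
BEGIN
   RETURN ROUND(price * 1.21);
END
//

DELIMITER ;
```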

This function takes an argument, price, which is defined as a DECIMAL, and returns an INT.

Take a look at the CREATE FUNCTION page for more details.

It is also possible to create stored aggregate functions.

    Stored Function Listings and Definitions

To find which stored functions are running on the server, use SHOW FUNCTION STATUS:

Alternatively, query the ROUTINES table in the INFORMATION_SCHEMA database directly:
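For example:

```sql
SELECT ROUTINE_NAME, ROUTINE_SCHEMA
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_TYPE = 'FUNCTION';
```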

To find out what the stored function does, use SHOW CREATE FUNCTION:

    Dropping and Updating Stored Functions

To drop a stored function, use the DROP FUNCTION statement.

To change the characteristics of a stored function, use ALTER FUNCTION. Note that you cannot change the parameters or body of a stored function using this statement; to make such changes, you must drop and re-create the function using DROP FUNCTION and CREATE FUNCTION.

    Permissions in Stored Functions

See the article on stored routine privileges.

    See Also

    This page is licensed: CC BY-SA / Gnu FDL

    Installing CONNECT

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    The CONNECT storage engine enables MariaDB to access external local or remote data (MED). This is done by defining tables based on different data types, in particular files in various formats, data extracted from other DBMS or products (such as Excel or MongoDB) via ODBC or JDBC, or data retrieved from the environment (for example DIR, WMI, and MAC tables)

    This storage engine supports table partitioning, MariaDB virtual columns and permits defining special columns such as ROWID, FILEID, and SERVID.

    The storage engine must be installed before it can be used.

    Installing the Plugin's Package

The CONNECT storage engine's shared library is included in MariaDB packages as the ha_connect.so or ha_connect.dll shared library on systems where it can be built.

    Installing on Linux

The CONNECT storage engine is included in binary tarballs on Linux.

    Installing with a Package Manager

    The CONNECT storage engine can also be installed via a package manager on Linux. In order to do so, your system needs to be configured to install from one of the MariaDB repositories.

You can configure your package manager to install it from MariaDB Corporation's MariaDB Package Repository by using the MariaDB Package Repository setup script.

You can also configure your package manager to install it from MariaDB Foundation's MariaDB Repository by using the MariaDB Repository Configuration Tool.

    Installing with yum/dnf

On RHEL, CentOS, Fedora, and other similar Linux distributions, it is highly recommended to install the relevant package from MariaDB's repository using yum or dnf. Starting with RHEL 8 and Fedora 22, yum has been replaced by dnf, which is the next major version of yum. However, yum commands still work on many systems that use dnf:
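The package name below is typical for MariaDB's repositories, but it may vary by distribution and version:

```shell
sudo yum install MariaDB-connect-engine
```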

    Installing with apt-get

On Debian, Ubuntu, and other similar Linux distributions, it is highly recommended to install the relevant package from MariaDB's repository using apt-get:
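The package name below is typical for MariaDB's repositories, but it may vary by distribution and version:

```shell
sudo apt-get install mariadb-plugin-connect
```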

    Installing with zypper

On SLES, OpenSUSE, and other similar Linux distributions, it is highly recommended to install the relevant package from MariaDB's repository using zypper:
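The package name below is typical for MariaDB's repositories, but it may vary by distribution and version:

```shell
sudo zypper install MariaDB-connect-engine
```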

    Installing the Plugin

    Once the shared library is in place, the plugin is not actually installed by MariaDB by default. There are two methods that can be used to install the plugin with MariaDB.

The first method can be used to install the plugin without restarting the server. You can install the plugin dynamically by executing INSTALL SONAME or INSTALL PLUGIN:
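For example:

```sql
INSTALL SONAME 'ha_connect';
```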

    The second method can be used to tell the server to load the plugin when it starts up. The plugin can be installed this way by providing the --plugin-load or the --plugin-load-add options. This can be specified as a command-line argument to mysqld or it can be specified in a relevant server option group in an option file:
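For example, in an option file (a minimal sketch; the option group name depends on your setup):

```ini
[mariadb]
# ...
plugin_load_add = ha_connect
# ...
```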

    Uninstalling the Plugin

    You can uninstall the plugin dynamically by executing UNINSTALL SONAME or UNINSTALL PLUGIN:
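For example:

```sql
UNINSTALL SONAME 'ha_connect';
```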

    If you installed the plugin by providing the --plugin-load or the --plugin-load-add options in a relevant server option group in an option file, then those options should be removed to prevent the plugin from being loaded the next time the server is restarted.

    Installing Dependencies

    The CONNECT storage engine has some external dependencies.

    Installing unixODBC

    The CONNECT storage engine requires an ODBC library. On Unix-like systems, that usually means installing unixODBC. On some systems, this is installed as the unixODBC package:
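For example, on yum-based distributions:

```shell
sudo yum install unixODBC
```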

    On other systems, this is installed as the libodbc1 package:
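For example, on apt-based distributions (the package is named libodbc1 on older Debian/Ubuntu releases; newer releases may use libodbc2):

```shell
sudo apt-get install libodbc1
```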

    If you do not have the ODBC library installed, then you may get an error about a missing library when you attempt to install the plugin:
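The exact message varies by platform, but it generally resembles the following (the path and library version shown are illustrative):

```
ERROR 1126 (HY000): Can't open shared library 'ha_connect.so'
(errno: 2, libodbc.so.2: cannot open shared object file: No such file or directory)
```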

    See Also

    This page is licensed: GPLv2

    Setting up a Replica with mariadb-backup

    Initialize a replication slave using a backup. This guide shows how to use mariadb-backup to provision a new replica from a master server.

    The terms master and slave have historically been used in replication, and MariaDB has begun the process of adding primary and replica synonyms. The old terms will continue to be used to maintain backward compatibility - see MDEV-18777 to follow progress on this effort.

    mariadb-backup was previously called mariabackup.

    This page documents how to set up a replica from a backup.

    If you are using MariaDB Galera Cluster, then you may want to try one of the following pages instead:

    Back up the Database and Prepare it

    The first step is to simply take and prepare a fresh full backup of a database server in the replication topology. If the source database server is the desired replication primary, then we do not need to add any additional options when taking the full backup. For example:
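For example (the backup user, password, and target directory are illustrative):

```shell
mariadb-backup --backup \
   --target-dir=/var/mariadb/backup/ \
   --user=mariadb-backup --password=mypassword
```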

    If the source database server is a replica of the desired primary, then we should add the --slave-info option, and possibly the --safe-slave-backup option. For example:
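For example, the same backup command with those options added:

```shell
mariadb-backup --backup \
   --slave-info --safe-slave-backup \
   --target-dir=/var/mariadb/backup/ \
   --user=mariadb-backup --password=mypassword
```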

    And then we would prepare the backup as you normally would. For example:
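For example:

```shell
mariadb-backup --prepare \
   --target-dir=/var/mariadb/backup/
```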

    Copy the Backup to the New Replica

    Once the backup is done and prepared, we can copy it to the new replica. For example:
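For example, with rsync over SSH (the host name dbserver2 is illustrative):

```shell
rsync -avP /var/mariadb/backup dbserver2:/var/mariadb/backup
```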

    Restore the Backup on the New Replica

    At this point, we can restore the backup to the datadir, as you normally would. For example:
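For example, with the server on the new replica stopped and its datadir empty:

```shell
mariadb-backup --copy-back \
   --target-dir=/var/mariadb/backup/
```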

    And adjusting file permissions, if necessary:
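For example, assuming the server runs as the mysql user and the default datadir:

```shell
chown -R mysql:mysql /var/lib/mysql/
```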

    Create a Replication User on the Primary

    Before the new replica can begin replicating from the primary, we need to create a user account on the primary that the replica can use to connect, and we need to grant the user account the REPLICATION SLAVE privilege. For example:
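For example (the account name, host, and password are illustrative):

```sql
CREATE USER 'repl'@'dbserver2.mydomain.com' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'dbserver2.mydomain.com';
```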

    Configure the New Replica

    Before we start the server on the new replica, we need to configure it. At the very least, we need to ensure that it has a unique server_id value. We also need to make sure other replication settings are what we want them to be, such as the various GTID system variables, if those apply in the specific environment.
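For example, in an option file on the new replica (the server_id value is illustrative; it just has to be unique in the topology):

```ini
[mariadb]
# ...
server_id = 2
# ...
```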

    Once configuration is done, we can start the MariaDB Server process on the new replica.

    Start Replication on the New Replica

    At this point, we need to get the replication coordinates of the primary from the original backup directory.

    If we took the backup on the primary, then the coordinates are in the xtrabackup_binlog_info file. If we took the backup on another replica and if we provided the --slave-info option, then the coordinates are in the xtrabackup_slave_info file.

    mariadb-backup dumps replication coordinates in two forms: GTID coordinates and binary log file and position coordinates, like the ones you would normally see from SHOW MASTER STATUS output. We can choose which set of coordinates we would like to use to set up replication.

    For example:
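Assuming the backup was taken on the primary, the file might look like this (the binary log file name, position, and GTID shown are illustrative):

```shell
cat /var/mariadb/backup/xtrabackup_binlog_info
# mariadb-bin.000096 568 0-1-2
```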

    Regardless of the coordinates we use, we will have to set up the primary connection using CHANGE MASTER TO and then start the replication threads with START SLAVE.

    GTIDs

    If we want to use GTIDs, then we will have to first set gtid_slave_pos to the GTID coordinates that we pulled from either the xtrabackup_binlog_info file or the xtrabackup_slave_info file in the backup directory. For example:
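Using the illustrative GTID from the earlier coordinates:

```sql
SET GLOBAL gtid_slave_pos = "0-1-2";
```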

    And then we would set MASTER_USE_GTID=slave_pos in the CHANGE MASTER TO statement. For example:
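A sketch, with illustrative host and credentials:

```sql
CHANGE MASTER TO
   MASTER_HOST = "dbserver1.mydomain.com",
   MASTER_PORT = 3306,
   MASTER_USER = "repl",
   MASTER_PASSWORD = "password",
   MASTER_USE_GTID = slave_pos;
START SLAVE;
```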

    File and Position

    If we want to use the binary log file and position coordinates, then we would set MASTER_LOG_FILE and MASTER_LOG_POS in the CHANGE MASTER TO statement to the file and position coordinates that we pulled from either the xtrabackup_binlog_info file or the xtrabackup_slave_info file in the backup directory, depending on whether the backup was taken from the primary or from a replica of the primary. For example:
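A sketch, reusing the illustrative host, credentials, and coordinates from above:

```sql
CHANGE MASTER TO
   MASTER_HOST = "dbserver1.mydomain.com",
   MASTER_PORT = 3306,
   MASTER_USER = "repl",
   MASTER_PASSWORD = "password",
   MASTER_LOG_FILE = 'mariadb-bin.000096',
   MASTER_LOG_POS = 568;
START SLAVE;
```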

    Check the Status of the New Replica

    We should be done setting up the replica now, so we should check its status with SHOW SLAVE STATUS. For example:
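Using the \G terminator for vertical output:

```sql
SHOW SLAVE STATUS \G
```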

    This page is licensed: CC BY-SA / Gnu FDL

    Using CONNECT - General Information

    The CONNECT storage engine has been deprecated.


    The main characteristic of CONNECT is to enable accessing data scattered on a machine as if it were a centralized database. This, and the fact that CONNECT does not use locking (data files are opened and closed for each query), makes CONNECT very useful for importing or exporting data into or from a MariaDB database, and also for all types of Business Intelligence applications. However, it is not suited for transactional applications.

    For instance, the index type used by CONNECT is closer to bitmap indexing than to B-trees. It is very fast for retrieval but not for updates. In fact, even if only one indexed value is modified in a big table, the index is entirely remade (though this is still four to five times faster than for a B-tree index). But normally in Business Intelligence applications, files are not modified so often.

    If you are using CONNECT to analyze files that can be modified by an external process, the indexes are of course not modified by it and become outdated. Use the OPTIMIZE TABLE command to update them before using the tables based on them.

    This also means that CONNECT is not designed to be used by centralized servers, which are mostly used for transactions and often must run a long time without human intervention.

    Performance

    Performance varies a great deal depending on the table type. For instance, ODBC tables are only retrieved as fast as the other DBMS can deliver them. If you have a lot of queries to execute, the best way to optimize your work can sometimes be to translate the data from one type to another. Fortunately, this is very simple with CONNECT. Fixed formats like FIX, BIN or VEC tables can be created from slower ones by commands such as:
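A minimal sketch of such a conversion (the table names are hypothetical; the source could be any slower CONNECT table type, such as CSV):

```sql
CREATE TABLE fastfix
ENGINE=CONNECT TABLE_TYPE=FIX
AS SELECT * FROM slowcsv;
```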

    FIX and BIN are often the better choice because the I/O functions are done on blocks of BLOCK_SIZE rows. VEC tables can be very efficient for tables having many columns of which only a few are used in each query. Furthermore, for tables of reasonable size, the MAPPED option can very often speed up many queries.

    Create Table statement

    Be aware of the two broad kinds of CONNECT tables:

    Drop Table statement

    For outward tables, the statement just removes the table definition but does not erase the table data. However, dropping an inward table erases the table data as well.

    Alter Table statement

    Be careful using the ALTER TABLE statement. Currently, data compatibility is not tested and the modified definition can become incompatible with the data. In particular, ALTER modifies the table definition only; it does not modify the table data. Consequently, the table type should not be modified this way, except to correct an incorrect definition. Adding, dropping or modifying columns may also be wrong because the default offset values (when not explicitly given by the FLAG option) may be wrong when recompiled with missing columns.

    Safe uses of ALTER are indexing, as we have seen earlier, and changing options such as MAPPED or HUGE that do not impact the data format but just the way the data file is accessed. Modifying the BLOCK_SIZE option is all right with FIX, BIN, DBF, and split VEC tables; however, it is unsafe for VEC tables that are not split (only one data file) because at their creation the estimated size has been made a multiple of the block size. This can cause errors if this estimate is not a multiple of the new value of the block size.

    In all cases, it is safer to drop and re-create the table (outward tables) or to make another one from the table that must be modified.

    Update and Delete for File Tables

    CONNECT can execute these commands using two different algorithms:

    • It can do it in place, directly modifying rows (update) or moving rows (delete) within the table file. This is a fast way to do it in particular when indexing is used.

    • It can do it using a temporary file to make the changes. This is required when updating variable record length tables and is more secure in all cases.

    The choice between these algorithms depends on the connect_use_tempfile session variable.

    This page is licensed: GPLv2


    Commonly Used Queries

    This guide provides examples of frequent SQL patterns, such as finding maximum values, calculating averages, and using auto-increment columns.

    This page is intended to be a quick reference of commonly-used and/or useful queries in MariaDB.

    Creating a Table

    See the CREATE TABLE page for more.
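A minimal sketch of a table with an auto-increment primary key (the table and column names are hypothetical):

```sql
CREATE TABLE employees (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  first_name VARCHAR(30),
  last_name VARCHAR(40)
);
```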

    String Functions

    Explore MariaDB's built-in string functions for formatting, extracting, and manipulating text data within your queries.

    MariaDB has many built-in functions that can be used to manipulate strings of data. With these functions, one can format data, extract certain characters, or use search expressions. Good developers should be aware of the string functions that are available. Therefore, in this article we will go through several string functions, grouping them by similar features, and provide examples of how they might be used.

    Formatting

    There are several string functions that are used to format text and numbers for nicer display. A popular and very useful function for pasting together the contents of data fields with text is the CONCAT() function. As an example, suppose that a table called contacts has a column for each sales contact's first name and another for the last name. The following SQL statement would put them together:
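A sketch of the statement described (the column names first_name and last_name are assumed):

```sql
SELECT CONCAT(first_name, ' ', last_name) AS Name
FROM contacts;
```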

    This statement will display the first name, a space, and then the last name together in one column. The AS clause will change the column heading of the results to Name.

    Importing Data into MariaDB

    Learn to import data into MariaDB with LOAD DATA INFILE and mariadb-import. This guide covers bulk loading, handling duplicates, and converting foreign data formats.

    When a MariaDB developer first creates a MariaDB database for a client, often the client has already accumulated data in other, simpler applications. Being able to convert data easily to MariaDB is critical. In the previous two articles of this MariaDB series, we explored how to set up a database and how to query one. In this third installment, we will introduce some methods and tools for bulk importing of data into MariaDB. This isn't an overly difficult task, but the processing of large amounts of data can be intimidating for a newcomer and, as a result, can be a barrier to getting started with MariaDB. Additionally, for intermediate developers, there are many nuances to consider for a clean import, which is especially important for automating regularly scheduled imports. There are also constraints to deal with that may be imposed on a developer when using a web hosting company.

    Foreign Data Basics

    Clients sometimes give developers raw data in formats created by simple database programs like MS Access®. Since non-technical clients don't typically understand database concepts, new clients often give us their initial data in Excel spreadsheets. Let's first look at a simple method for importing data. The simplest way to deal with incompatible data in any format is to load it up in its original software and to export it out to a delimited text file. Most applications have the ability to export data to a text format and will allow the user to set the delimiters. We like to use the bar (i.e., |) as a delimiter.

    Individual Database Restores with mariadb-backup from Full Backup

    Restore a single database from a full backup. Learn the procedure to extract and recover a specific database schema from a larger backup set.

    This method addresses a limitation of mariadb-backup: it cannot easily do single-database restores from a full backup. There is a way to restore individual tables, but it's a manual process which is fine for a few tables; if you have hundreds or even thousands of tables, it would be impossible to do quickly.

    We can't just move the data files to the datadir, as the tables are not registered in the engines, so the database will error. Currently, the only effective method is to do a full restore into a test database and then dump the database that requires restoring, or to run a partial backup.

    This has only been tested with InnoDB. Also, if you have stored procedures or triggers then these will need to be deleted and recreated.

    Some of the issues that this method overcomes:

    • Tables are not registered in the InnoDB engine, so selecting from a table will error if you simply move the data files into the datadir

    Stored Aggregate Functions

    Stored Aggregate Functions allow users to create custom aggregate functions that process a sequence of rows and return a single summary result.

    Aggregate functions are functions that are computed over a sequence of rows and return one result for the sequence of rows.

    Creating a custom aggregate function is done using the CREATE FUNCTION statement with two main differences:

    • The addition of the AGGREGATE keyword, so CREATE AGGREGATE FUNCTION

    • The FETCH GROUP NEXT ROW instruction inside the loop
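Putting both differences together, a sketch of a custom aggregate that sums an integer column (the function and variable names are hypothetical):

```sql
DELIMITER //

CREATE AGGREGATE FUNCTION sum_ages(age INT) RETURNS INT
BEGIN
  DECLARE total INT DEFAULT 0;
  -- When there are no more rows in the group, return the accumulated result
  DECLARE CONTINUE HANDLER FOR NOT FOUND RETURN total;
  LOOP
    FETCH GROUP NEXT ROW;
    SET total = total + age;
  END LOOP;
END //

DELIMITER ;
```

It could then be used like any built-in aggregate, e.g. SELECT sum_ages(age) FROM students;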

    Choosing the Right Storage Engine

    A guide to selecting the appropriate storage engine based on data needs, comparing features of general-purpose, columnar, and specialized engines.

    A high-level overview of the main reasons for choosing a particular storage engine:

    Topic List

    General Purpose

    Using CONNECT - Virtual and Special Columns

    The CONNECT storage engine has been deprecated.


    CONNECT supports MariaDB virtual columns. It is also possible to declare a column as being a CONNECT special column. Let us see with an example how this can be done. The boys table we have seen previously can be recreated as:
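A sketch of such a definition, based on the earlier boys example (the exact FLAG offsets, LRECL, and file name depend on your data file layout):

```sql
CREATE TABLE boys (
  linenum INT(6) NOT NULL DEFAULT 0 SPECIAL=ROWID,
  name    CHAR(12) NOT NULL,
  city    CHAR(12) NOT NULL,
  birth   DATE NOT NULL DATE_FORMAT='DD/MM/YYYY',
  hired   DATE NOT NULL DATE_FORMAT='DD/MM/YYYY' FLAG=36,
  agehired INT(3) AS (FLOOR(DATEDIFF(hired, birth) / 365.25)) VIRTUAL,
  fn      CHAR(100) NOT NULL DEFAULT '' SPECIAL=FILEID
) ENGINE=CONNECT TABLE_TYPE=FIX FILE_NAME='boys.txt';
```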

    We have defined two CONNECT special columns. You can give them any name; it is the SPECIAL field option that specifies the special column's functional name.

    Note: the default values specified for the special columns do not mean anything. They are specified just to prevent getting warning messages when inserting new rows.

    For the definition of the agehired virtual column, no CONNECT options can be specified as it has no offset or length, not being stored in the file.

    $ mariadb-backup --backup \
       --target-dir=/var/mariadb/backup/ \
       --databases='app1 app2' --tables='tab_[0-9]+' \
       --user=mariadb-backup --password=mypassword
    mariadb-backup --backup --databases="db1" \
      --target-dir=/backup --history=partial_db1
    $ mariadb-backup --prepare --export \
       --target-dir=/var/mariadb/backup/
    mariadb
    mariadb -h 166.78.144.191 -u username -ppassword database_name
    --host=name, -h name
    --password[=passwd], -p[passwd]
    --pipe, -W
    --port=num, -P num
    --protocol=name
    --shared-memory-base-name=name
    --socket=name, -S name
    --ssl
    --ssl-ca=name
    --ssl-capath=name
    --ssl-cert=name
    --ssl-cipher=name
    --ssl-key=name
    --ssl-crl=name
    --ssl-crlpath=name
    --ssl-verify-server-cert
    --user=name, -u name
    SELECT u.id, u.name, alliance.ally FROM users u JOIN alliance ON
    (u.id=alliance.userId) JOIN team ON (alliance.teamId=team.teamId)
    WHERE team.teamName='Legionnaires' AND u.online=1 AND ((u.subscription='paid'
    AND u.paymentStatus='current') OR u.subscription='free') ORDER BY u.name;
    SELECT
        u.id
        , u.name
        , alliance.ally
    FROM
        users u
        JOIN alliance ON (u.id = alliance.userId)
        JOIN team ON (alliance.teamId = team.teamId)
    WHERE
        team.teamName = 'Legionnaires'
        AND u.online = 1
        AND (
            (u.subscription = 'paid' AND u.paymentStatus = 'current')
            OR
            u.subscription = 'free'
        )
    ORDER BY
        u.name;
    SELECT *
    FROM
        financial_reportQ_1 AS a
        JOIN sales_renderings AS b ON (a.salesGroup = b.groupId)
        JOIN sales_agents AS c ON (b.groupId = c.group)
    WHERE
        b.totalSales > 10000
        AND c.id != a.clientId
    SELECT *
    FROM
        financial_report_Q_1 AS frq1
        JOIN sales_renderings AS sr ON (frq1.salesGroup = sr.groupId)
        JOIN sales_agents AS sa ON (sr.groupId = sa.group)
    WHERE
        sr.totalSales > 10000
        AND sa.id != frq1.clientId
    SELECT *
    FROM
        family,
        relationships
    WHERE
        family.personId = relationships.personId
        AND relationships.relation = 'father'
    SELECT *
    FROM
        family
        JOIN relationships ON (family.personId = relationships.personId)
    WHERE
        relationships.relation = 'father'
    ERROR 1064: You have an error in your SQL syntax; check the manual that corresponds to your
    MariaDB server version for the right syntax to use near ' ' at line 1
    SELECT * FROM someTable WHERE field = 'value
    SELECT * FROM someTable WHERE field = 1 GROUP BY id,
    SELECT * FROM actionTable WHERE `DELETE` = 1;
    SELECT * FROM a, b JOIN c ON a.x = c.x;
    SELECT * FROM someTable WHERE someId IN (SELECT id FROM someLookupTable);
    SELECT * FROM tableA JOIN tableB ON tableA.x = tableB.y;
    * /etc/my.cnf                              (*nix/BSD)
      * $MYSQL_HOME/my.cnf                       (*nix/BSD) *Most Notably /etc/mysql/my.cnf
      * SYSCONFDIR/my.cnf                        (*nix/BSD)
      * DATADIR\my.ini                           (Windows)
    shell> mariadbd --help --verbose
    mariadbd  Ver 10.11.5-MariaDB for linux-systemd on x86_64 (MariaDB Server)
    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
    
    Starts the MariaDB database server.
    
    Usage: ./mariadbd [OPTIONS]
    
    Default options are read from the following files in the given order:
    /etc/my.cnf /etc/mysql/my.cnf ~/.my.cnf
    [mysqld]
        ...
        skip-networking
        ...
        bind-address = <some ip-address>
        ...
    [mysqld]
        ...
        #skip-networking
        ...
        #bind-address = <some ip-address>
        ...
    [mysqld]
    skip-networking=0
    skip-bind-address
    shell> ./sql/mariadbd --print-defaults
    ./sql/mariadbd would have been started with the following arguments:
    --bind-address=127.0.0.1 --innodb_file_per_table=ON --server-id=1 --skip-bind-address ...
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 36
    Server version: 5.5.28-MariaDB-mariadb1~lucid mariadb.org binary distribution
    
    Copyright (c) 2000, 2012, Oracle, Monty Program Ab and others.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    MariaDB [(none)]>
    SELECT User, Host FROM mysql.user WHERE Host <> 'localhost';
    +--------+-----------+
    | User   | Host      |
    +--------+-----------+
    | daniel | %         |
    | root   | 127.0.0.1 |
    | root   | ::1       |
    | root   | gandalf   |
    +--------+-----------+
    4 rows in set (0.00 sec)
    GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.100.%' 
      IDENTIFIED BY 'my-new-password' WITH GRANT OPTION;
    firewall-cmd --add-port=3306/tcp 
    firewall-cmd --permanent --add-port=3306/tcp
    BEGIN
    FOR EACH affected TABLE
    SELECT 1 FROM <TABLE> LIMIT 0
    mariadb -u root -p -h localhost
    DELIMITER //
    
    CREATE FUNCTION FortyTwo() RETURNS TINYINT DETERMINISTIC
    BEGIN
     DECLARE x TINYINT;
     SET x = 42;
     RETURN x;
    END 
    
    //
    
    DELIMITER ;

    This page is licensed: GPLv2

    When trying to insert a row whose value is higher than the upper limit of the last partition, the row is rejected with an error, unless the IGNORE keyword is used.

    Since MariaDB/MySQL 5.1.6, selects scan past BLOB columns unless they are specifically requested, making these queries much more efficient.

  • Does not support spatial data types.

  • Does not support transactions.

  • Does not support foreign keys.

  • Does not support virtual columns.

  • No storage limit.

  • Supports row locking.

  • Supports table discovery, and the server can access ARCHIVE tables even if the corresponding .frm file is missing.

  • OPTIMIZE TABLE and REPAIR TABLE can be used to compress the table in its entirety, resulting in slightly better compression.

  • With MariaDB, it is possible to upgrade from the MySQL 5.0 format without having to dump the tables.

  • INSERT DELAYED is supported.

  • Running many SELECTs during insertions can deteriorate the compression, unless only multi-row INSERTs and INSERT DELAYED are used.


    For JDBC, using GET_LINES() is preferable, because it's more efficient than individual GET_LINE() calls.

  • Procedure PUT_LINE() - submit a line into the internal buffer.
  • Procedure PUT() - submit a partial line into the buffer.

  • Procedure NEW_LINE() - terminate a line submitted by PUT().

  • Procedure GET_LINE() - read one line (the earliest) from the buffer. When a line is read by GET_LINE(), it's automatically removed from the buffer.

  • Procedure GET_LINES() - read all lines (as an array of strings) from the buffer - this procedure isn't implemented.

    line  | status
    line1 | 0
    line3 | 0
    -     | 1


    Inward

    These are tables whose file name is not specified at create time. An empty file is given a default name (tabname.tabtype) and is populated like tables of other engines. They do not require the FILE privilege and can be used for testing purposes.

    Outward

    These are all other CONNECT tables, which access external data sources or files. They are the truly useful tables but require the FILE privilege.


    A less used concatenating function is CONCAT_WS(). It will put together columns with a separator between each. This can be useful when making data available for other programs. For instance, suppose we have a program that will import data, but it requires the fields to be separated by vertical bars. We could just export the data, or we could use a SELECT statement like the one that follows in conjunction with an interface written with an API language like Perl:
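A sketch of such a statement (the column and table names are hypothetical):

```sql
SELECT CONCAT_WS('|', col1, col2, col3)
FROM table1;
```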

    The first element above is the separator. The remaining elements are the columns to be strung together.

    If we want to format a long number with commas every three digits and a period for the decimal point (e.g., 100,000.00), we can use the function FORMAT() like so:
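A sketch of the statement described (col5 and the table name are hypothetical):

```sql
SELECT CONCAT('$', FORMAT(col5, 2))
FROM table3;
```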

    In this statement, the CONCAT() will place a dollar sign in front of the numbers found in the col5 column, which are formatted with commas by FORMAT(). The 2 within the FORMAT() stipulates two decimal places.

    Occasionally, one will want to convert the text from a column to either all upper-case letters or all lower-case letters. In the example that follows, the output of the first column is converted to upper-case and the second to lower-case:
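A sketch, with hypothetical column and table names:

```sql
SELECT UPPER(col1), LOWER(col2)
FROM table4;
```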

    When displaying data in forms, it's sometimes useful to pad the data displayed with zeros or dots or some other filler. This can be necessary when dealing with VARCHAR columns where the width varies to help the user to see the column limits. There are two functions that may be used for padding: LPAD() and RPAD().
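A sketch of the statement described next (column names and pad widths are hypothetical):

```sql
SELECT RPAD(part_nbr, 8, '.'),
       LPAD(description, 15, '_')
FROM parts;
```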

    In this SQL statement, dots are added to the right end of each part number. So a part number of "H200" will display as "H200....", but without the quotes. Each part's description will have under-scores preceding it. A part with a description of "brass hinge" will display as "brass hinge".

    If a column is a CHAR data-type, a fixed-width column, then it may be necessary to trim any leading or trailing spaces from displays. There are a few functions to accomplish this task. The LTRIM() function will eliminate any leading spaces to the left. So " H200" becomes "H200". For columns with trailing spaces, spaces on the right, RTRIM() will work: "H500 " becomes "H500". A more versatile trimming function, though, is TRIM(). With it one can trim left, right or both. Below are a few examples:
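A sketch matching the explanation that follows (column and table names are hypothetical):

```sql
SELECT TRIM(LEADING '.' FROM col1),
       TRIM(TRAILING FROM col2),
       TRIM(BOTH '_' FROM col3),
       TRIM(col4)
FROM table5;
```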

    In the first TRIM() clause, the padding component is specified; the leading dots are to be trimmed from the output of col1. The trailing spaces are trimmed off of col2—space is the default. Both leading and trailing under-scores are trimmed from col3 above. Unless specified, BOTH is the default. So leading and trailing spaces are trimmed from col4 in the statement here.

    Extracting

    When there is a need to extract specific elements from a column, MariaDB has a few functions that can help. Suppose a column in the table contacts contains the telephone numbers of sales contacts, including the area-codes, but without any dashes or parentheses. The area-code of each could be extracted for sorting with the LEFT() and the telephone number with the RIGHT() function.
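A sketch of the statement described (the telephone column is assumed to hold ten digits):

```sql
SELECT LEFT(telephone, 3) AS area_code,
       RIGHT(telephone, 7) AS phone_number
FROM contacts
ORDER BY area_code;
```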

    In the LEFT() function above, the column telephone is given along with the number of characters to extract, starting from the first character on the left in the column. The RIGHT() function is similar, but it starts from the last character on the right, counting left to capture, in this statement, the last seven characters. In the SQL statement above, area_code is reused to order the results set. To reformat the telephone number, it is necessary to use the SUBSTRING() function.
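A sketch of the reformatting statement explained next:

```sql
SELECT CONCAT('(', LEFT(telephone, 3), ') ',
              SUBSTRING(telephone, 4, 3), '-',
              MID(telephone, 7)) AS telephone
FROM contacts;
```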

    In this SQL statement, the CONCAT() function is employed to assemble some characters and extracted data to produce a common display for telephone numbers (e.g., (504) 555-1234). The first element of the CONCAT() is an opening parenthesis. Next, a LEFT() is used to get the first three characters of telephone, the area-code. After that a closing parenthesis, along with a space, is added to the output. The next element uses the SUBSTRING() function to extract the telephone number's prefix, starting at the fourth position, for a total of three characters. Then a dash is inserted into the display. Finally, the function MID() extracts the remainder of the telephone number, starting at the seventh position. The functions MID() and SUBSTRING() are interchangeable and their syntax is the same. By default, for both functions, if the number of characters to capture isn't specified, then it's assumed that the remaining ones are to be extracted.

    Manipulating

    There are a few functions in MariaDB that can help in manipulating text. One such function is REPLACE(). With it every occurrence of a search parameter in a string can be replaced. For example, suppose we wanted to replace the title Mrs. with Ms. in a column containing the person's title, but only in the output. The following SQL would do the trick:
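A sketch of the statement described (the title, first-name, and last-name column names are hypothetical):

```sql
SELECT CONCAT(REPLACE(title, 'Mrs.', 'Ms.'), ' ',
              name_first, ' ', name_last) AS Contact
FROM contacts;
```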

    We're using the ever handy CONCAT() function to put together the contact's name with spaces. The REPLACE() function extracts each title and replaces Mrs. with Ms., where applicable. Otherwise, for all other titles, it displays them unchanged.

    If we want to insert or replace certain text from a column (but not all of its contents), we could use the INSERT() function in conjunction with the LOCATE() function. For example, suppose another contacts table has the contact's title and full name in one column. To change the occurrences of Mrs. to Ms., we could not use REPLACE() since the title is embedded in this example. Instead, we would do the following:
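A sketch of the statement explained next (the name column is assumed to contain both title and full name):

```sql
SELECT INSERT(name, LOCATE('Mrs.', name), 4, 'Ms.')
FROM contacts;
```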

    The first element of the INSERT() function is the column. The second element which contains the LOCATE() is the position in the string that text is to be inserted. The third element is optional; it states the number of characters to overwrite. In this case, Mrs. which is four characters is overwritten with Ms. (the final element), which is only three. Incidentally, if 0 is specified, then nothing is overwritten, text is inserted only. As for the LOCATE() function, the first element is the column and the second the search text. It returns the position within the column where the text is found. If it's not found, then 0 is returned. A value of 0 for the position in the INSERT() function negates it and returns the value of name unchanged.

    On the odd chance that there is a need to reverse the content of a column, there's the REVERSE() function. You would just place the column name within the function. Another minor function is the REPEAT() function. With it a string may be repeated in the display:
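A minimal sketch:

```sql
SELECT REPEAT('-', 20);
```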

    The first component of the function above is the string or column to be repeated. The second component states the number of times it's to be repeated.

    Expression Aids

The function CHAR_LENGTH() is used to determine the number of characters in a string. This could be useful in a situation where a column contains different types of information of specific lengths. For instance, suppose a column in a table for a college contains identification numbers for students, faculty, and staff. If student identification numbers have eight characters while others have fewer, the following will count the number of student records:

    The COUNT() function above counts the number of rows that meet the condition of the WHERE clause.

    In a SELECT statement, an ORDER BY clause can be used to sort a results set by a specific column. However, if the column contains IP addresses, a simple sort may not produce the desired results:

    In the limited results above, the IP address 10.0.2.1 should be second. This happens because the column is being sorted lexically and not numerically. The function INET_ATON() will solve this sorting problem.

Basically, the INET_ATON() function converts an IP address to a regular number for numeric sorting. For instance, if we were to use the function in the list of columns in a SELECT statement, instead of the WHERE clause, the address 10.0.1.1 would return 167772417, 10.0.11.1 would return 167774977, and 10.0.2.1 the number 167772673. As a complement to INET_ATON(), the function INET_NTOA() will translate these numbers back to their original IP addresses.
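To see the conversion in isolation, the function can be called directly; each octet is treated as a base-256 digit:

```sql
-- Each octet is a base-256 digit:
-- 10.0.2.1 -> 10*256^3 + 0*256^2 + 2*256 + 1 = 167772673
SELECT INET_ATON('10.0.2.1');  -- 167772673
SELECT INET_NTOA(167772673);   -- '10.0.2.1'
```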

    MariaDB is fairly case insensitive, which usually is fine. However, to be able to check by case, the STRCMP() function can be used. It converts the column examined to a string and makes a comparison to the search parameter.

If there is an exact match, the function STRCMP() returns 0. So if col3 here contains "Text", it won't match. Incidentally, if col3 alphabetically sorts before the string to which it's compared, a -1 is returned. If it sorts after it, a 1 is returned.

When you have a list of items in one string, the SUBSTRING_INDEX() function can be used to pull out a sub-string of data. As an example, suppose we have a column which has five elements, but we want to retrieve just the first two elements. This SQL statement will return them:

    The first component in the function above is the column or string to be picked apart. The second component is the delimiter. The third is the number of elements to return, counting from the left. If we want to grab the last two elements, we would use a negative two to instruct MariaDB to count from the right end.

    Conclusion

There are more string functions available in MariaDB. A few of the functions mentioned here have aliases or close alternatives. There are also functions for converting between ASCII, binary, hexadecimal, and octal strings, as well as string functions related to text encryption and decryption that were not mentioned. However, this article has given you a good collection of common string functions that will assist you in building more powerful and accurate SQL statements.

    This page is licensed: CC BY-SA / Gnu FDL


Tables with foreign keys need to be created without the keys; otherwise, an error occurs when you discard the tablespace.

    Single Node

    Below is the process to perform a single database restore.

Firstly, we will need the table structure from a mariadb-dump backup taken with the --no-data option. I recommend doing this at least once per day, or every six hours, via a cron job. As it is just the structure, it is very fast.

Use sed to extract only the table structure we require, then use vim or another text editor to make sure nothing extraneous is left.

Prepare the backup, applying any incremental backups that you have (see incremental-backup-and-restores), and then run the following on the full backup folder using the --export option to generate files with .cfg extensions, which InnoDB will look for.

Once we have done these steps, we can then import the table structure. If you used the --all-databases option, then you will need to either use sed or open the dump in a text editor and extract the tables that you require. You will also need to log in to the database server and create the database if the dump file doesn't do so. Run the following command:

Once the structure is in the database, the tables are registered with the storage engine. Next, we will run the following statements against the information_schema database to generate the statements that discard/import tablespaces and drop and re-create foreign keys, which we will use later. (Edit the CONSTRAINT_SCHEMA and TABLE_SCHEMA WHERE clauses to the database you are restoring. Also, add the following lines after your SELECT and before the FROM to have MariaDB export the files to the OS.)

    The following are the statements that we will need later.

Once we have run those statements, the generated statements will have been exported to a Linux directory (or can be copied from a GUI interface).

    Run the ALTER DROP KEYS statements in the database.

Once completed, run the DISCARD TABLESPACE statements in the database:

Exit out of the database and change into the directory of the full backup location. Run the following commands to copy all the .cfg and .ibd files to the datadir, such as /var/lib/mysql/testdatabase (change the datadir location if needed). Learn more about the files that mariadb-backup generates with files-created-by-mariadb-backup.

After moving the files, it is very important that the mysql user owns them; otherwise, the server won't have access to them and will error when we import the tablespaces.

Run the IMPORT TABLESPACE statements in the database.

Run the add key statements in the database.

    We have successfully restored a single database. To test that this has worked, we can do a basic check on some tables.

Replica Nodes

If you have a primary-replica setup, it would be best to follow the steps above for the primary node and then either take a full mariadb-dump or take a new full mariadb-backup and restore it to the replica. You can find more information about restoring a replica with mariadb-backup in Setting up a Replica with mariadb-backup.

After running the command below, copy the backup to the replica and use the less Linux command to grab the CHANGE MASTER statement. Remember to follow this process: stop the replica > restore the data > run the CHANGE MASTER statement > start the replica again.
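On the replica, the sequence ends up looking something like the sketch below; the log file name and position are placeholders and must be taken from the backup's binary log information.

```sql
STOP SLAVE;

-- Coordinates below are placeholders; take them from the backup's binlog info.
CHANGE MASTER TO
  MASTER_LOG_FILE='mariadb-bin.000096',
  MASTER_LOG_POS=568;

START SLAVE;
```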

    Please follow Setting up a Replica with mariadb-backup on restoring a replica with mariadb-backup:

    Galera Cluster

For this process to work with a Galera cluster, we first need to understand that some statements are not replicated across Galera nodes. Among these are the ALTER TABLE ... DISCARD TABLESPACE and ALTER TABLE ... IMPORT TABLESPACE statements, which will need to be run on all nodes. We also need to run the OS-level steps on each server, as seen below.

    Run the ALTER DROP KEYS statements on ONE NODE as these are replicated.

Once completed, run the DISCARD TABLESPACE statements on EVERY NODE, as these are not replicated.

Exit out of the database and change into the directory of the full backup location. Run the following commands to copy all the .cfg and .ibd files to the datadir, such as /var/lib/mysql/testdatabase (change the datadir location if needed). Learn more about the files that mariadb-backup generates with files-created-by-mariadb-backup. This step needs to be done on all nodes: you will need to copy the backup files to each node, and the same backup can be used on all nodes.

After moving the files, it is very important that the mysql user owns them; otherwise, the server won't have access to them and will error when we import the tablespaces.

Run the IMPORT TABLESPACE statements on EVERY NODE.

    Run the add key statements on ONE NODE.

    This page is licensed: CC BY-SA / Gnu FDL


    mariadb-backup was previously called mariabackup.

    $ mariadb-backup --backup \
       --target-dir=/var/mariadb/backup/ \
       --user=mariadb-backup --password=mypassword
    backup_type = full-backuped
    from_lsn = 0
    to_lsn = 1635102
    last_lsn = 1635102
    recover_binlog_info = 0
    $ mariadb-backup --backup \
       --target-dir=/var/mariadb/inc1/ \
       --incremental-basedir=/var/mariadb/backup/ \
       --user=mariadb-backup --password=mypassword
    backup_type = incremental
    from_lsn = 1635102
    to_lsn = 1635114
    last_lsn = 1635114
    recover_binlog_info = 0
    $ mariadb-backup --backup \
       --target-dir=/var/mariadb/inc2/ \
       --incremental-basedir=/var/mariadb/inc1/ \
       --user=mariadb-backup --password=mypassword
    mariadb-backup --backup --target-dir=/full \
      --history=full_backup_1
    mariadb-backup --backup --target-dir=/inc1 \
      --incremental-history-name=full_backup_1 \
      --history=inc_backup_1
    # initial full backup
    $ mariadb-backup --backup --stream=mbstream \
      --user=mariadb-backup --password=mypassword \
      --extra-lsndir=backup_base | gzip > backup_base.gz
    
    # incremental backup
    $ mariadb-backup --backup --stream=mbstream \
      --incremental-basedir=backup_base \
      --user=mariadb-backup --password=mypassword \
      --extra-lsndir=backup_inc1 | gzip > backup-inc1.gz
    $ mariadb-backup --prepare \
       --target-dir=/var/mariadb/backup
    $ mariadb-backup --prepare \
       --target-dir=/var/mariadb/backup \
       --incremental-dir=/var/mariadb/inc1
    $ mariadb-backup --copy-back \
       --target-dir=/var/mariadb/backup/
    $ chown -R mysql:mysql /var/lib/mysql/
    PARTITION BY RANGE (partitioning_expression)
    (
    	PARTITION partition_name VALUES LESS THAN (value),
    	[ PARTITION partition_name VALUES LESS THAN (value), ... ]
    	[ PARTITION partition_name VALUES LESS THAN MAXVALUE ]
    )
    CREATE TABLE log
    (
    	id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    	dt DATETIME NOT NULL,
    	user INT UNSIGNED,
    	PRIMARY KEY (id, dt)
    )
    	ENGINE = InnoDB
    PARTITION BY RANGE (YEAR(dt))
    (
    	PARTITION p0 VALUES LESS THAN (2013),
    	PARTITION p1 VALUES LESS THAN (2014),
    	PARTITION p2 VALUES LESS THAN (2015),
    	PARTITION p3 VALUES LESS THAN (2016)
    );
    CREATE TABLE log2
    (
    	id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    	ts TIMESTAMP NOT NULL,
    	user INT UNSIGNED,
    	PRIMARY KEY (id, ts)
    )
    	ENGINE = InnoDB
    PARTITION BY RANGE (UNIX_TIMESTAMP(ts))
    (
    	PARTITION p0 VALUES LESS THAN (UNIX_TIMESTAMP('2014-08-01 00:00:00')),
    	PARTITION p1 VALUES LESS THAN (UNIX_TIMESTAMP('2014-11-01 00:00:00')),
    	PARTITION p2 VALUES LESS THAN (UNIX_TIMESTAMP('2015-01-01 00:00:00')),
    	PARTITION p3 VALUES LESS THAN (UNIX_TIMESTAMP('2015-02-01 00:00:00'))
    );
    ALTER TABLE log DROP PARTITION p0;
    INSERT INTO log(id,dt) VALUES 
      (1, '2016-01-01 01:01:01'), 
      (2, '2015-01-01 01:01:01');
    ERROR 1526 (HY000): Table has no partition for value 2016
    INSERT IGNORE INTO log(id,dt) VALUES 
      (1, '2016-01-01 01:01:01'), 
      (2, '2015-01-01 01:01:01');
    
    SELECT * FROM log;
    +----+---------------------+------+
| id | dt                  | user |
    +----+---------------------+------+
    |  2 | 2015-01-01 01:01:01 | NULL |
    +----+---------------------+------+
    CREATE TABLE log
    (
    	id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    	dt DATETIME NOT NULL,
    	user INT UNSIGNED,
    	PRIMARY KEY (id, dt)
    )
    	ENGINE = InnoDB
    PARTITION BY RANGE (YEAR(dt))
    (
    	PARTITION p0 VALUES LESS THAN (2013),
    	PARTITION p1 VALUES LESS THAN (2014),
    	PARTITION p2 VALUES LESS THAN (2015),
    	PARTITION p3 VALUES LESS THAN (2016),
    	PARTITION p4 VALUES LESS THAN MAXVALUE
    );
    $ mariadb-backup --backup \
       --target-dir=/var/mariadb/backup/ \
       --user=mariadb-backup --password=mypassword
    $ ls /var/mariadb/backup/
    
    aria_log.0000001  mysql                   xtrabackup_checkpoints
    aria_log_control  performance_schema      xtrabackup_info
    backup-my.cnf     test                    xtrabackup_logfile
    ibdata1           xtrabackup_binlog_info
    $ mariadb-backup --backup \
       --target-dir=/var/mariadb/backup/ \
       --user=mariadb-backup --password=mypassword \
       --history=full_backup_weekly
    $ mariadb-backup --prepare \
       --target-dir=/var/mariadb/backup/
    $ mariadb-backup --copy-back \
       --target-dir=/var/mariadb/backup/
    $ chown -R mysql:mysql /var/lib/mysql/
    $ rsync -avrP /var/mariadb/backup /var/lib/mysql/
    $ chown -R mysql:mysql /var/lib/mysql/
    INSTALL SONAME 'ha_archive';
    [mariadb]
    ...
    plugin_load_add = ha_archive
    UNINSTALL SONAME 'ha_archive';
    CALL DBMS_OUTPUT.ENABLE;
    DROP TABLE t1;
    CREATE TABLE t1 (line VARCHAR2(400), status INTEGER);
    DECLARE
      line   VARCHAR2(400);
      status INTEGER;
    BEGIN
      DBMS_OUTPUT.PUT_LINE('line1');
      DBMS_OUTPUT.PUT_LINE('line2');
      DBMS_OUTPUT.GET_LINE(line, status);
      INSERT INTO t1 VALUES (line, status);
  DBMS_OUTPUT.PUT_LINE('line3'); -- This clears the buffer (removes line2) before putting line3
      LOOP
        DBMS_OUTPUT.GET_LINE(line, status);
        INSERT INTO t1 VALUES (line, status);
        EXIT WHEN status <> 0;
      END LOOP;
    END;
    /
    SELECT * FROM t1;
    TYPE CHARARR IS TABLE OF VARCHAR2(32767) INDEX BY BINARY_INTEGER;
    TYPE DBMSOUTPUT_LINESARRAY IS VARRAY(2147483647) OF VARCHAR2(32767);
    SET sql_mode=ORACLE;
    DELIMITER /
    DECLARE
      all_lines MEDIUMTEXT CHARACTER SET utf8mb4 :='';
      line MEDIUMTEXT CHARACTER SET utf8mb4;
      status INT;
    BEGIN
      sys.DBMS_OUTPUT.PUT_LINE('line1');
      sys.DBMS_OUTPUT.PUT_LINE('line2');
      sys.DBMS_OUTPUT.PUT_LINE('line3');
      LOOP
        sys.DBMS_OUTPUT.GET_LINE(line, status);
        EXIT WHEN status > 0;
        all_lines:= all_lines || line || '\n';
      END LOOP;
      SELECT all_lines;
    END;
    /
    DELIMITER ;
    SELECT FortyTwo();
    +------------+
    | FortyTwo() |
    +------------+
    |         42 |
    +------------+
    DELIMITER //
    CREATE FUNCTION VatCents(price DECIMAL(10,2)) RETURNS INT DETERMINISTIC
    BEGIN
     DECLARE x INT;
     SET x = price * 114;
     RETURN x;
    END //
    Query OK, 0 rows affected (0.04 sec)
    DELIMITER ;
    SHOW FUNCTION STATUS\G
    *************************** 1. row ***************************
                      Db: test
                    Name: VatCents
                    Type: FUNCTION
                 Definer: root@localhost
                Modified: 2013-06-01 12:40:31
                 Created: 2013-06-01 12:40:31
           Security_type: DEFINER
                 Comment: 
    character_set_client: utf8
    collation_connection: utf8_general_ci
      Database Collation: latin1_swedish_ci
    1 row in set (0.00 sec)
    SELECT ROUTINE_NAME FROM INFORMATION_SCHEMA.ROUTINES WHERE
      ROUTINE_TYPE='FUNCTION';
    +--------------+
    | ROUTINE_NAME |
    +--------------+
    | VatCents     |
    +--------------+
    SHOW CREATE FUNCTION VatCents\G
    *************************** 1. row ***************************
                Function: VatCents
                sql_mode: 
         Create Function: CREATE DEFINER=`root`@`localhost` FUNCTION `VatCents`(price DECIMAL(10,2)) RETURNS int(11)
        DETERMINISTIC
    BEGIN
     DECLARE x INT;
     SET x = price * 114;
     RETURN x;
    END
    character_set_client: utf8
    collation_connection: utf8_general_ci
      Database Collation: latin1_swedish_ci
    DROP FUNCTION FortyTwo;
    sudo yum install MariaDB-connect-engine
    sudo apt-get install mariadb-plugin-connect
    sudo zypper install MariaDB-connect-engine
    INSTALL SONAME 'ha_connect';
    [mariadb]
    ...
    plugin_load_add = ha_connect
    UNINSTALL SONAME 'ha_connect';
    sudo yum install unixODBC
    sudo apt-get install libodbc1
    INSTALL SONAME 'ha_connect';
    ERROR 1126 (HY000): Can't open shared library '/home/ian/MariaDB_Downloads/10.1.17/lib/plugin/ha_connect.so' 
      (errno: 2, libodbc.so.1: cannot open shared object file: No such file or directory)
    CREATE TABLE fastable table_specs SELECT * FROM slowtable;
    SELECT CONCAT(name_first, ' ', name_last)
    AS Name
    FROM contacts;
    SELECT CONCAT_WS('|', col1, col2, col3)
    FROM table1;
    SELECT CONCAT('$', FORMAT(col5, 2))
    FROM table3;
    SELECT UCASE(col1),
    LCASE(col2)
    FROM table4;
    SELECT RPAD(part_nbr, 8, '.') AS 'Part Nbr.',
    LPAD(description, 15, '_') AS Description
    FROM catalog;
    SELECT TRIM(LEADING '.' FROM col1),
    TRIM(TRAILING FROM col2),
    TRIM(BOTH '_' FROM col3),
    TRIM(col4)
    FROM table5;
    SELECT LEFT(telephone, 3) AS area_code,
    RIGHT(telephone, 7) AS tel_nbr
    FROM contacts
    ORDER BY area_code;
    SELECT CONCAT('(', LEFT(telephone, 3), ') ',
    SUBSTRING(telephone, 4, 3), '-',
    MID(telephone, 7)) AS 'Telephone Number'
    FROM contacts
    ORDER BY LEFT(telephone, 3);
    SELECT CONCAT(REPLACE(title, 'Mrs.', 'Ms.'),
    ' ', name_first, ' ', name_last) AS Name
    FROM contacts;
SELECT INSERT(name, LOCATE('Mrs.', name), 4, 'Ms.')
FROM contacts;
    SELECT REPEAT(col1, 2)
    FROM table1;
    SELECT COUNT(school_id)
    AS 'Number of Students'
    FROM table8
    WHERE CHAR_LENGTH(school_id)=8;
    SELECT ip_address 
    FROM computers WHERE server='Y' 
    ORDER BY ip_address LIMIT 3;
    
    +-------------+
    | ip_address  |
    +-------------+
    | 10.0.1.1    |
    | 10.0.11.1   |
    | 10.0.2.1    |
    +-------------+
    SELECT ip_address 
    FROM computers WHERE server='Y' 
    ORDER BY INET_ATON(ip_address) LIMIT 3;
    SELECT col1, col2 
    FROM table6 
    WHERE STRCMP(col3, 'text')=0;
    SELECT SUBSTRING_INDEX(col4, '|', 2)
    FROM table7;
    mariadb-dump -u root -p --all-databases --no-data > nodata.sql
sed -n '/Current Database: `DATABASENAME`/, /Current Database:/p' nodata.sql > trimmednodata.sql
vim trimmednodata.sql
    mariadb-backup --prepare --export --target-dir=/media/backups/fullbackupfolder
    mysql -u root -p schema_name < nodata.sql
    SELECT ...
    INTO OUTFILE '/tmp/filename.SQL'
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    FROM ...
    USE information_schema;
    SELECT concat("ALTER TABLE ",table_name," DISCARD TABLESPACE;")  AS discard_tablespace
    FROM information_schema.tables 
    WHERE TABLE_SCHEMA="DATABASENAME";
    
    SELECT concat("ALTER TABLE ",table_name," IMPORT TABLESPACE;") AS import_tablespace
    FROM information_schema.tables 
    WHERE TABLE_SCHEMA="DATABASENAME";
    
    SELECT 
    CONCAT ("ALTER TABLE ", rc.CONSTRAINT_SCHEMA, ".",rc.TABLE_NAME," DROP FOREIGN KEY ", rc.CONSTRAINT_NAME,";") AS drop_keys
    FROM REFERENTIAL_CONSTRAINTS AS rc
    WHERE CONSTRAINT_SCHEMA = 'DATABASENAME';
    
    SELECT
    CONCAT ("ALTER TABLE ", 
    KCU.CONSTRAINT_SCHEMA, ".",
    KCU.TABLE_NAME," 
    ADD CONSTRAINT ", 
    KCU.CONSTRAINT_NAME, " 
    FOREIGN KEY ", "
    (`",KCU.COLUMN_NAME,"`)", " 
    REFERENCES `",REFERENCED_TABLE_NAME,"` 
    (`",REFERENCED_COLUMN_NAME,"`)" ," 
    ON UPDATE " ,(SELECT UPDATE_RULE FROM REFERENTIAL_CONSTRAINTS WHERE CONSTRAINT_NAME = KCU.CONSTRAINT_NAME AND CONSTRAINT_SCHEMA = KCU.CONSTRAINT_SCHEMA)," 
    ON DELETE ",(SELECT DELETE_RULE FROM REFERENTIAL_CONSTRAINTS WHERE CONSTRAINT_NAME = KCU.CONSTRAINT_NAME AND CONSTRAINT_SCHEMA = KCU.CONSTRAINT_SCHEMA),";") AS add_keys
    FROM KEY_COLUMN_USAGE AS KCU
    WHERE KCU.CONSTRAINT_SCHEMA = 'DATABASENAME'
    AND KCU.POSITION_IN_UNIQUE_CONSTRAINT >= 0
    AND KCU.CONSTRAINT_NAME NOT LIKE 'PRIMARY';
    ALTER TABLE schemaname.tablename DROP FOREIGN KEY key_name;
    ...
    ALTER TABLE test DISCARD TABLESPACE;
    ...
    cp *.cfg /var/lib/mysql
    cp *.ibd /var/lib/mysql
    sudo chown -R mysql:mysql /var/lib/mysql
    ALTER TABLE test IMPORT TABLESPACE;
    ...
ALTER TABLE schemaname.tablename ADD CONSTRAINT key_name FOREIGN KEY (`column_name`) REFERENCES `foreign_table` (`column_name`) ON UPDATE NO ACTION ON DELETE NO ACTION;
    ...
USE databasename;
    SELECT * FROM test LIMIT 10;
    mariadb-dump -u user -p --single-transaction --master-data=2 > fullbackup.sql
    $ mariadb-backup --backup \
       --slave-info --safe-slave-backup \
       --target-dir=/var/mariadb/backup/ \
       --user=mariadb-backup --password=mypassword
    ALTER TABLE schemaname.tablename DROP FOREIGN KEY key_name;
    ...
    ALTER TABLE test DISCARD TABLESPACE;
    ...
    cp *.cfg /var/lib/mysql
    cp *.ibd /var/lib/mysql
    sudo chown -R mysql:mysql /var/lib/mysql
    ALTER TABLE test IMPORT TABLESPACE;
    ...
ALTER TABLE schemaname.tablename ADD CONSTRAINT key_name FOREIGN KEY (`column_name`) REFERENCES `foreign_table` (`column_name`) ON UPDATE NO ACTION ON DELETE NO ACTION;
    ...
    Inserting Records

    See INSERT for more.

    Using AUTO_INCREMENT

    The AUTO_INCREMENT attribute is used to automatically generate a unique identity for new rows.

    When inserting, the id field can be omitted, and is automatically created.
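A minimal sketch (hypothetical table and names) of how this looks in practice:

```sql
CREATE TABLE animals (
  id MEDIUMINT NOT NULL AUTO_INCREMENT,
  name CHAR(30) NOT NULL,
  PRIMARY KEY (id)
);

-- id is omitted; unique values are generated automatically
INSERT INTO animals (name) VALUES ('dog'), ('cat'), ('penguin');
```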

    See AUTO_INCREMENT for more.

    Querying from two tables on a common value

    This kind of query is called a join - see JOINS for more.

    Finding the Maximum Value

    See the MAX() function for more, as well as Finding the maximum value and grouping the results below for a more practical example.

    Finding the Minimum Value

    See the MIN() function for more.

    Finding the Average Value

    See the AVG() function for more.

    Finding the Maximum Value and Grouping the Results

    See the MAX() function for more.

    Ordering Results

    See ORDER BY for more.

    Finding the Row with the Minimum of a Particular Column

    In this example, we want to find the lowest test score for any student.
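A sketch of one way to do this, assuming a hypothetical test_scores table with student_name and score columns:

```sql
-- Return the single row holding the lowest score
SELECT student_name, score
FROM test_scores
ORDER BY score ASC
LIMIT 1;
```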

    Finding Rows with the Maximum Value of a Column by Group

    This example returns the best test results of each student:
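Assuming a hypothetical test_scores table with student_name and score columns, grouping by student does the job:

```sql
SELECT student_name, MAX(score) AS best_score
FROM test_scores
GROUP BY student_name;
```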

    Calculating Age

    The TIMESTAMPDIFF function can be used to calculate someone's age:
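A sketch, assuming a hypothetical students table with a date_of_birth column:

```sql
SELECT name,
       TIMESTAMPDIFF(YEAR, date_of_birth, CURDATE()) AS age
FROM students;
```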

    See TIMESTAMPDIFF() for more.

    Using User-defined Variables

    This example sets a user-defined variable with the average test score, and then uses it in a later query to return all results above the average.

    User-defined variables can also be used to add an incremental counter to a resultset:
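Both uses can be sketched as follows, with hypothetical table and column names:

```sql
-- Store the average score, then reuse it in a later query
SELECT @avg_score := AVG(score) FROM test_scores;
SELECT student_name, score
FROM test_scores
WHERE score > @avg_score;

-- Add an incremental counter to a resultset
SET @row := 0;
SELECT @row := @row + 1 AS row_num, student_name
FROM test_scores;
```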

    See User-defined Variables for more.

    View Tables in Order of Size

    Returns a list of all tables in the database, ordered by size:
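A query along these lines (the database name is a placeholder) pulls the sizes from information_schema:

```sql
SELECT table_name,
       data_length + index_length AS total_bytes
FROM information_schema.tables
WHERE table_schema = 'mydb'   -- placeholder database name
ORDER BY total_bytes DESC;
```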

    Removing Duplicates

    This example assumes there's a unique ID, but that all other fields are identical. In the example below, there are 4 records, 3 of which are duplicates, so two of the three duplicates need to be removed. The intermediate SELECT is not necessary, but demonstrates what is being returned.
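One way to sketch this, assuming a hypothetical table t1 with a unique id and two otherwise-identical data fields:

```sql
-- Optional sanity check: list the ids that would be removed
SELECT a.id
FROM t1 a
JOIN t1 b
  ON a.field1 = b.field1
 AND a.field2 = b.field2
 AND a.id > b.id;

-- Delete every duplicate except the row with the lowest id
DELETE a
FROM t1 a
JOIN t1 b
  ON a.field1 = b.field1
 AND a.field2 = b.field2
 AND a.id > b.id;
```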

This page is licensed: CC BY-SA / Gnu FDL

The exported text file uses the vertical bar (|, a.k.a. pipe) to separate fields and the line-feed to separate records.

For the examples in this article, we will assume that a fictitious client's data was in Excel and that the exported text file is named prospects.txt. It contains contact information about prospective customers for the client's sales department, located on the client's intranet site. The data is to be imported into a MariaDB table called prospect_contact, in a database called sales_dept. To make the process simpler, the order and number of columns in MS Excel® (the format of the data provided by the client) should be the same as in the table into which the data is going to be imported. So if prospect_contact has columns that are not included in the spreadsheet, one would make a copy of the spreadsheet, add the missing columns, and leave them blank. If there are columns in the spreadsheet that aren't in prospect_contact, one would either add them to the MariaDB table or, if they're not to be imported, delete the extra columns from the spreadsheet. One should also delete any headings and footnotes from the spreadsheet. After this is completed, the data can be exported. Since this is Unix Review, we'll skip how one would export data in Excel and assume that the task was accomplished easily enough using its export wizard.

    The next step is to upload the data text file to the client's web site by FTP. It should be uploaded in ASCII mode. Binary mode may send binary hard-returns for row-endings. Also, it's a good security habit to upload data files to non-public directories. Many web hosting companies provide virtual domains with a directory like /public_html, which is the document root for the Apache web server; it typically contains the site's web pages. In such a situation, / is a virtual root containing logs and other files that are inaccessible to the public. We usually create a directory called tmp in the virtual root directory to hold data files temporarily for importing into MariaDB. Once that's done, all that's required is to log into MariaDB with the mariadb client as an administrative user (if not root, then a user with FILE privileges), and run the proper SQL statement to import the data.

    Loading Data Basics

    The LOAD DATA INFILE statement is the easiest way to import data from a plain text file into MariaDB. Below is what one would enter in the mariadb client to load the data in the file called prospects.txt into the table prospect_contact:
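A statement along these lines would do it (reconstructed from the scenario; the delimiter clause matches the pipe-delimited export described above):

```sql
LOAD DATA INFILE '/tmp/prospects.txt'
INTO TABLE prospect_contact
FIELDS TERMINATED BY '|';
```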

Before entering the statement above, the MariaDB session would, of course, be switched to the sales_dept database with a USE statement. It is possible, though, to specify the database along with the table name (e.g., sales_dept.prospect_contact). If the server is running Windows, forward slashes are still used for the text file's path, but a drive may need to be specified at the beginning of the path: 'c:/tmp/prospects.txt'. Notice that the SQL statement above has | as the field delimiter. If the delimiter was [TAB]—which is common—then one would replace | with '\t' here. A line-feed ('\n') isn't specified as the record delimiter since it's assumed. If the rows start and end with something else, though, then they will need to be stated. For instance, suppose the rows in the text file start with a double-quote and end with a double-quote and a Windows hard-return (i.e., a carriage-return and a line-feed). The statement would need to read like this:
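A sketch of that statement, with the quote and Windows line-ending spelled out:

```sql
LOAD DATA INFILE '/tmp/prospects.txt'
INTO TABLE prospect_contact
FIELDS TERMINATED BY '|'
LINES STARTING BY '"'
      TERMINATED BY '"\r\n';
```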

    Notice that the starting double-quote is inside of single-quotes. If one needs to specify a single-quote as the start of a line, one could either put the one single-quote within double-quotes or one could escape the inner single-quote with a back-slash, thus telling MariaDB that the single-quote that follows is to be taken literally and is not part of the statement, per se:

    Duplicate Rows

If the table prospect_contact already contains some of the records that are about to be imported from prospects.txt (that is to say, records with the same primary key), then a decision should be made as to what MariaDB is to do about the duplicates. The SQL statement, as it stands above, will cause MariaDB to try to import the duplicate records and to create duplicate rows in prospect_contact for them. If the table's properties are set not to allow duplicates, then MariaDB will kick out errors. To get MariaDB to replace the existing rows with the ones being imported, one would add the REPLACE flag just before the INTO TABLE clause, like this:
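With the REPLACE flag added, the statement would read along these lines:

```sql
LOAD DATA INFILE '/tmp/prospects.txt'
REPLACE
INTO TABLE prospect_contact
FIELDS TERMINATED BY '|';
```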

    To import only records for prospects that are not already in prospect_contact, one would substitute REPLACE with the IGNORE flag. This instructs MariaDB to ignore records read from the text file that already exist in the table.

    Live Data

    For importing data into a table while it's in use, table access needs to be addressed. If access to the table by other users may not be interrupted, then a LOW_PRIORITY flag can be added to the LOAD DATA INFILE statement. This tells MariaDB that the loading of this data is a low priority. One would only need to change the first line of the SQL statement above to set its priority to low:
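Only the first line changes; a sketch:

```sql
LOAD DATA LOW_PRIORITY INFILE '/tmp/prospects.txt'
INTO TABLE prospect_contact
FIELDS TERMINATED BY '|';
```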

If the LOW_PRIORITY flag isn't included, the table is locked temporarily during the import, and other users are prevented from accessing it.

    Being Difficult

    I mentioned earlier that uploading of the text file should not be done in binary mode so as to avoid the difficulties associated with Windows line endings. If this is unavoidable, however, there is an easy way to import binary row-endings with MariaDB. One would just specify the appropriate hexadecimals for a carriage-return combined with a line-feed (i.e., CRLF) as the value of TERMINATED BY:
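A sketch, using the CRLF hexadecimal value 0x0d0a without quotes:

```sql
LOAD DATA INFILE '/tmp/prospects.txt'
INTO TABLE prospect_contact
FIELDS TERMINATED BY '|'
LINES TERMINATED BY 0x0d0a;
```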

    Notice that there are intentionally no quotes around the binary value. If there were, MariaDB would take the value for text and not a binary code. The semi-colon is not part of the value; it's the SQL statement terminator.

    Earlier we also stated that the first row in the spreadsheet containing the column headings should be deleted before exporting to avoid the difficulty of importing the headings as a record. It's actually pretty easy to tell MariaDB to just skip the top line. One would add the following line to the very end of the LOAD DATA INFILE statement:
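The clause in context, here skipping a single heading line:

```sql
LOAD DATA INFILE '/tmp/prospects.txt'
INTO TABLE prospect_contact
FIELDS TERMINATED BY '|'
IGNORE 1 LINES;
```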

    The number of lines for MariaDB to ignore can, of course, be more than one.

Another difficulty arises when some Windows application wizards export data with each field surrounded by double-quotes, as well as around the start and end of records. This can be a problem when a field contains a double-quote. To deal with this, some applications use a back-slash (\) to escape embedded double-quotes, to indicate that a particular double-quote is not a field ending but part of the field's content. However, some applications will use a different character (like a pound-sign) to escape embedded quotes. This can cause problems if MariaDB isn't prepared for the odd escape-character. MariaDB will think the escape character is actually text and that the embedded quote-mark, although it's escaped, is a field ending. The unenclosed text that follows is imported into the next column, the remaining columns end up one column off, and the last column is not imported. As maddening as this can be, it's quite manageable in MariaDB by adding an ENCLOSED BY and an ESCAPED BY clause:
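A sketch using a pound-sign as the escape character, as in the scenario described:

```sql
LOAD DATA INFILE '/tmp/prospects.txt'
INTO TABLE prospect_contact
FIELDS TERMINATED BY '|'
       ENCLOSED BY '"'
       ESCAPED BY '#';
```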

Earlier we said that the columns in the spreadsheet should be put in the same order and quantity as in the receiving table. This really isn't necessary if MariaDB is cued in as to what it should expect. To illustrate, let's assume that prospect_contact has four columns in the following order: row_id, name_first, name_last, telephone. Whereas, the spreadsheet has only three columns, differently named, in this order: Last Name, First Name, Telephone. If the spreadsheet isn't adjusted, then the SQL statement will need to be changed to tell MariaDB the field order:
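A reconstruction of that statement, with the tab delimiter given as the hexadecimal 0x09 and the column list matching the spreadsheet's order:

```sql
LOAD DATA INFILE '/tmp/prospects.txt'
INTO TABLE prospect_contact
FIELDS TERMINATED BY 0x09
(name_last, name_first, telephone);
```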

    This SQL statement tells MariaDB the name of each table column associated with each spreadsheet column in the order that they appear in the text file. From there it will naturally insert the data into the appropriate columns in the table. As for columns that are missing like row_id, MariaDB will fill in those fields with the default value if one has been supplied in the table's properties. If not, it will leave the field as NULL. Incidentally, we slipped in the binary [TAB] (0x09) as a field delimiter.

    mariadb-import

    For some clients and for certain situations it may be of value to be able to import data into MariaDB without using the mariadb client. This could be necessary when constructing a shell script to import text files on an automated, regular schedule. To accomplish this, the mariadb-import (mysqlimport before ) utility may be used as it encompasses the LOAD DATA INFILE statement and can easily be run from a script. So if one wants to enter the involved SQL statement at the end of the last section above, the following could be entered from the command-line (i.e., not in the mariadb client):

Although this statement is written over several lines here, it either has to be on one line when entered or a space followed by a back-slash has to be entered at the end of each line (as seen here) to indicate that more follows. Since the above is entered at the command-line prompt, the user isn't logged into MariaDB. Therefore the first line contains the user name and password for mariadb-import to give to MariaDB. The password itself is optional, but the directive --password (without the equal sign) isn't. If the password value is not given in the statement, then the user is prompted for it. Notice that the order of directives doesn't matter after the initial command, except that the database and file name go last. Regarding the file name, its prefix must be the same as the table's name—the dot and the extension are ignored. This requires that prospects.txt be renamed to prospect_contact.txt. If the file isn't renamed, then MariaDB would create a new table called prospects and the --replace option would be pointless. After the file name, incidentally, one could list more text files, separated by a space, for mariadb-import to process. We've added the --verbose directive so as to be able to see what's going on. One would probably leave this out in an automated script. By the way, --low-priority and --ignore-lines are also available.

    Web Hosting Restraints

Some web hosting companies do not allow the use of LOAD DATA INFILE statements or the mariadb-import utility because of the security implications for them. To get around this, some extra steps are necessary to avoid having to manually enter the data one row at a time. First, one needs to have MariaDB installed on one's local workstation. For simplicity, we'll assume this is done and the workstation is running Linux on the main partition and MS Windows® on an extra partition. Recapping the on-going example of this article based on these new circumstances, one would boot up into Windows, start MS Excel®, load the client's spreadsheet into it and then run the export wizard as before—saving the file prospects.txt to the 'My Documents' directory. Then one would reboot into Linux, mount the Windows partition and copy the data text file to /tmp in Linux, locally. Next one would log into the local (not the client's) MariaDB server and import the text file using a LOAD DATA INFILE statement as we've extensively outlined above. From there one would exit MariaDB and export the data out of MariaDB using the mariadb-dump utility locally, from the command-line like this:

This creates an interesting text file complete with all of the SQL statements necessary to insert the data back into MariaDB one record, one INSERT at a time. After running mariadb-dump, it's very educational to open the resulting file in a text editor to see what it generates.

    After creating this table dump, one would upload the resulting file (in ASCII mode) to the /tmp directory on the client's web server. From the command prompt on the client's server one would enter the following:

This line, along with the mariadb-dump line shown above, is a simple approach. Like the Windows application wizard, with mariadb-dump one can specify the format of the output file and several other factors. One important factor related to the scenario used in this article is the CREATE TABLE statement that is embedded in the mariadb-dump output file. It will fail and kick out an error because of the existing table prospect_contact in the client's database. To limit the output to only INSERT statements and no CREATE TABLE statement, the mariadb-dump line would look like this:
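As a sketch, such a command might look like the following (the credentials are this article's example values; --no-create-info is the standard mariadb-dump option that suppresses CREATE TABLE statements):

```shell
mariadb-dump -u marie_dyer -p --no-create-info \
  sales_dept prospect_contact > prospects.sql
```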

Notice that we've used acceptable abbreviations for the user name and the password directives. Since the password value was not given here, the user is prompted for it.

The mariadb-dump utility usually works pretty well. However, one feature it lacks at this time is a REPLACE flag such as is found in the LOAD DATA INFILE statement and in the mariadb-import tool. So if a record already exists in prospect_contact, it won't be imported. Instead the import will kick out an error message and stop at that record, which can be a mess if one has imported several hundred rows and has several hundred more to go. One easy fix for this is to open prospects.sql in a text editor and search for the word INSERT and replace it with REPLACE. The syntax of these two statements is the same, fortunately. So one only needs to swap the keyword for new records to be inserted and for existing records to be replaced.
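The same INSERT-to-REPLACE substitution can be scripted rather than done in an editor. Here is a sketch using sed on a stand-in dump file (the real file would be the prospects.sql generated above):

```shell
# Create a tiny stand-in dump file for demonstration
printf "INSERT INTO prospect_contact VALUES ('Dyer','Marie','7135551212');\n" \
  > /tmp/prospects.sql

# Rewrite each INSERT statement as REPLACE so existing rows are
# overwritten instead of raising duplicate-key errors
sed -i 's/^INSERT INTO/REPLACE INTO/' /tmp/prospects.sql

cat /tmp/prospects.sql
```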

    Concluding Observations and Admissions

    It's always amazing to me how much can be involved in the simplest of statements in MariaDB. MariaDB is deceptively powerful and feature rich. One can keep the statements pretty minimal or one can develop a fairly detailed, single statement to allow for accuracy of action. There are many other aspects of importing data into MariaDB that we did not address—in particular dealing with utilities. We also didn't talk about the Perl modules that could be used to convert data files. These can be useful in scripting imports. There are many ways in which one can handle importing data. Hopefully, this article has presented most of the basics and pertinent advanced details that may be of use to most MariaDB developers.

    This page is licensed: CC BY-SA / Gnu FDL

    Oracle PL/SQL compatibility using SQL/PL is provided

    Standard Syntax

    Stored aggregate functions were a project by Varun Gupta.

    Using SQL/PL

    Examples

    First a simplified example:
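As a sketch of what a stored aggregate function can look like (the table and column names here are illustrative, not taken from this page's original listing), a function that counts the rows of each group might be written as:

```sql
CREATE TABLE marks (stud_id INT, grade_count INT);

DELIMITER //
CREATE AGGREGATE FUNCTION count_students(expr INT) RETURNS INT
BEGIN
  DECLARE total INT DEFAULT 0;
  -- The NOT FOUND handler fires when the group runs out of rows;
  -- an aggregate function returns its result from this handler.
  DECLARE CONTINUE HANDLER FOR NOT FOUND RETURN total;
  LOOP
    -- Suspend until the next row of the current group is available
    FETCH GROUP NEXT ROW;
    SET total = total + 1;
  END LOOP;
END //
DELIMITER ;

SELECT count_students(grade_count) FROM marks;
```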

    A non-trivial example that cannot easily be rewritten using existing functions:

    SQL/PL Example

    This uses the same marks table as created above.

    See Also

    • Stored Function Overview

    • CREATE FUNCTION

    • SHOW CREATE FUNCTION

    • DROP FUNCTION

    This page is licensed: CC BY-SA / Gnu FDL

  • InnoDB is a good general transaction storage engine, and the best choice in most cases. It is the default storage engine.

  • Aria, MariaDB's more modern improvement on MyISAM, has a small footprint and allows for easy copying between systems.

  • MyISAM has a small footprint and allows for easy copying between systems. MyISAM is MySQL's oldest storage engine. There is usually little reason to use it except for legacy purposes. Aria is MariaDB's more modern improvement.

  • XtraDB is no longer available. It was a performance-enhanced fork of InnoDB and was previously MariaDB's default engine.

  • Scaling, Partitioning

    When you want to split your database load on several servers or optimize for scaling. We also suggest looking at Galera Cluster, a synchronous multi-master cluster.

    • Spider uses partitioning to provide data sharding through multiple servers.

    • ColumnStore utilizes a massively parallel distributed data architecture and is designed for big data scaling to process petabytes of data.

    • The MERGE storage engine is a collection of identical MyISAM tables that can be used as one. "Identical" means that all tables have identical column and index information.

    Compression / Archive

    • MyRocks enables greater compression than InnoDB, as well as less write amplification giving better endurance of flash storage and improving overall throughput.

    • The Archive storage engine is, unsurprisingly, best used for archiving.

    Connecting to Other Data Sources

    When you want to use data not stored in a MariaDB database.

    • The CSV storage engine can read and append to files stored in CSV (comma-separated-values) format. However, since MariaDB 10.0, CONNECT is a better choice and is more flexibly able to read and write such files.

    Search Optimized

    Storage engines optimized for search.

    • SphinxSE is used as a proxy to run statements on a remote Sphinx database server (mainly useful for advanced fulltext searches).

    • Mroonga provides fast CJK-ready full text searching using column store.

    Cache, Read-only

    • MEMORY does not write data on-disk (all rows are lost on crash) and is best-used for read-only caches of data from other tables, or for temporary work areas. With the default InnoDB and other storage engines having good caching, there is less need for this engine than in the past.

    Other Specialized Storage Engines

    • S3 Storage Engine is a read-only storage engine that stores its data in Amazon S3.

    • Sequence allows the creation of ascending or descending sequences of numbers (positive integers) with a given starting value, ending value and increment, creating virtual, ephemeral tables automatically when you need them.

    • The BLACKHOLE storage engine accepts data but does not store it and always returns an empty result. This can be useful in replication environments, for example, if you want to run complex filtering rules on a slave without incurring any overhead on a master.

    • OQGRAPH allows you to handle hierarchies (tree structures) and complex graphs (nodes having many connections in several directions).

    Alphabetical List

    • The Archive storage engine is, unsurprisingly, best used for archiving.

    • Aria, MariaDB's more modern improvement on MyISAM, has a small footprint and allows for easy copying between systems.

    • The BLACKHOLE storage engine accepts data but does not store it and always returns an empty result. This can be useful in replication environments, for example, if you want to run complex filtering rules on a slave without incurring any overhead on a master.

    • ColumnStore utilizes a massively parallel distributed data architecture and is designed for big data scaling to process petabytes of data.

    • CONNECT allows access to different kinds of text files and remote resources as if they were regular MariaDB tables.

    • The CSV storage engine can read and append to files stored in CSV (comma-separated-values) format. However, since MariaDB 10.0, CONNECT is a better choice and is more flexibly able to read and write such files.

    • InnoDB is a good general transaction storage engine, and the best choice in most cases. It is the default storage engine.

    • The MERGE storage engine is a collection of identical MyISAM tables that can be used as one. "Identical" means that all tables have identical column and index information.

    • MEMORY does not write data on-disk (all rows are lost on crash) and is best-used for read-only caches of data from other tables, or for temporary work areas. With the default InnoDB and other storage engines having good caching, there is less need for this engine than in the past.

    • Mroonga provides fast CJK-ready full text searching using column store.

    • MyISAM has a small footprint and allows for easy copying between systems. MyISAM is MySQL's oldest storage engine. There is usually little reason to use it except for legacy purposes. Aria is MariaDB's more modern improvement.

    • MyRocks enables greater compression than InnoDB, as well as less write amplification giving better endurance of flash storage and improving overall throughput.

    • OQGRAPH allows you to handle hierarchies (tree structures) and complex graphs (nodes having many connections in several directions).

    • The S3 storage engine is a read-only storage engine that stores its data in Amazon S3.

    • Sequence allows the creation of ascending or descending sequences of numbers (positive integers) with a given starting value, ending value and increment, creating virtual, ephemeral tables automatically when you need them.

    • SphinxSE is used as a proxy to run statements on a remote Sphinx database server (mainly useful for advanced fulltext searches).

    • Spider uses partitioning to provide data sharding through multiple servers.

    This page is licensed: CC BY-SA / Gnu FDL

    The command:

    will return:

    linenum  name  city    birth       hired       agehired  fn
    1        John  Boston  1986-01-25  2010-06-02  24        d:\mariadb\sql\data\boys.txt
    2

    Existing special columns are listed below:

    ROWID (Integer): The row ordinal number in the table. This is not quite equivalent to a virtual column with an auto increment of 1 because rows are renumbered when deleting rows.

    ROWNUM (Integer): The row ordinal number in the file. This is different from ROWID for multiple tables, TBL/XCOL/OCCUR/PIVOT tables, XML tables with a multiple column, and for DBF tables where ROWNUM includes soft deleted rows.

    FILEID, FDISK, FPATH, FNAME, FTYPE (String): FILEID returns the full name of the file this row belongs to. Useful in particular for multiple tables represented by several files. The other special columns can be used to retrieve only one part of the full name.

    TABID (String)

    Note: CONNECT does not currently support auto incremented columns. However, a ROWID special column will do the job of a column auto incremented by 1.

    This page is licensed: GPLv2


    Maturity  Connect version
    Stable    1.07.0001
    Stable    1.06.0010
    Stable    1.06.0007
    Stable    1.06.0005
    Stable    1.06.0004
    Stable    1.06.0001
    Beta      1.05.0003
    Stable    1.05.0001
    Stable    1.04.0008
    Stable    1.04.0006
    Stable    1.04.0005
    Beta      1.04.0003
    Beta


    For a complete list of mariadb-backup options, see this page.

    For a detailed description of mariadb-backup functionality, see this page.


    Copying Tables Between Databases and Servers

    This guide explains various methods for copying tables between MariaDB databases and servers, including using FLUSH TABLES FOR EXPORT and mysqldump.

    With MariaDB it's very easy to copy tables between different MariaDB databases and different MariaDB servers. This works for tables created with the Archive, Aria, CSV, InnoDB, MyISAM, MERGE, and XtraDB engines.

    The normal procedure to copy a table is:

    The table files can be found in datadir/databasename (you can execute SELECT @@datadir to find the correct directory). When copying the files, you should copy all files with the same table_name + various extensions. For example, for an Aria table named foo, you will have the files foo.frm, foo.MAI, foo.MAD and possibly foo.TRG if you have triggers.
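As a sketch (with a hypothetical directory under /tmp standing in for the real datadir reported by SELECT @@datadir), copying the Aria table foo from database db1 to db2 means copying every file that shares the table name:

```shell
# Stand-in for the real datadir reported by SELECT @@datadir
DATADIR=/tmp/mariadb-copy-demo
mkdir -p "$DATADIR/db1" "$DATADIR/db2"

# Pretend these are the files of an Aria table named 'foo'
touch "$DATADIR/db1/foo.frm" "$DATADIR/db1/foo.MAI" "$DATADIR/db1/foo.MAD"

# Copy every file that shares the table name
cp "$DATADIR"/db1/foo.* "$DATADIR/db2/"
ls "$DATADIR/db2"
```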

    If one wants to distribute a table to a user that doesn't need write access to the table, and one wants to minimize the storage size of the table, the recommended engine to use is Aria or MyISAM, as one can pack the table with aria_pack or myisampack respectively to make it notably smaller. MyISAM is the most portable format, as it does not depend on the server settings. Aria and InnoDB require the same block size on both servers.

    Copying Tables When the MariaDB Server is Down

    The following storage engines support export without FLUSH TABLES ... FOR EXPORT, assuming the source server is down and the receiving server is not accessing the files during the copy.

    Engine
    Comment

    Copying Tables Live From a Running MariaDB Server

    For all of the above storage engines (Archive, Aria, CSV, MyISAM and MERGE), one can copy tables even from a live server under the following circumstances:

    • You have done a FLUSH TABLES or FLUSH TABLE table_name for the specific table.

    • The server is not accessing the tables during the copy process.

    The advantage of FLUSH TABLES ... FOR EXPORT is that the table is read locked until UNLOCK TABLES is executed.
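A minimal sequence for a live copy of a single table might look like the following (table name from the earlier example; the file copy happens at the operating-system level while the lock is held):

```sql
FLUSH TABLES foo FOR EXPORT;
-- ... copy the table files at the OS level here ...
UNLOCK TABLES;
```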

    Warning: If you do the above live copy, you are doing it at your own risk; if you do something wrong, the copied table is very likely to be corrupted. The original table will of course be fine.

    An Efficient Way to Give Someone Else Access to a Read Only Table

    If you want to give a user access to some data in a table for the user to use in their MariaDB server, you can do the following:

    First let's create the table we want to export. To speed up things, we create this without any indexes. We use TRANSACTIONAL=0 ROW_FORMAT=DYNAMIC for Aria to use the smallest possible row format.

    Then we pack it and generate the indexes. We use a big sort buffer to speed up generating the index.
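A sketch of the two steps, assuming an Aria table my_table in the current directory (standard aria_pack and aria_chk options; the exact flags in the omitted listing may differ):

```shell
# Compress the data file (the table must not be in use)
aria_pack --ignore-control-file my_table

# Rebuild the indexes with a large sort buffer
aria_chk -rq --ignore-control-file --sort_buffer_size=1G my_table
```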

    The procedure for MyISAM tables is identical, except that myisampack doesn't have the --ignore-control-file option.

    Copying InnoDB's Transportable Tablespaces

    InnoDB's file-per-table tablespaces are transportable, which means that you can copy a file-per-table tablespace from one MariaDB Server to another server. See for more information.

    Importing Tables

    Tables that use most storage engines are immediately usable when their files are copied to the new data directory.

    However, this is not true for tables that use InnoDB. InnoDB tables have to be imported. See for more information.

    See Also

    • myisampack - Compressing the MyISAM data file for easier distribution.

    • aria_pack - Compressing the Aria data file for easier distribution.

    This page is licensed: CC BY-SA / Gnu FDL

    mariadb-backup and BACKUP STAGE Commands

    Understand backup locking stages. This page explains how mariadb-backup uses BACKUP STAGE commands to minimize locking during operation.

    mariadb-backup was previously called mariabackup.

    The BACKUP STAGE commands are a set of commands to make it possible to make an efficient external backup tool. How mariadb-backup uses these commands depends on whether you are using the version that is bundled with MariaDB Community Server or the version that is bundled with MariaDB Enterprise Server.

    For a complete list of mariadb-backup options, .

    For a detailed description of mariadb-backup functionality, .

    mariadb-backup and BACKUP STAGE Commands in MariaDB Community Server

    The BACKUP STAGE commands are supported. However, the version of mariadb-backup that is bundled with MariaDB Community Server does not yet use the BACKUP STAGE statement in the most efficient way. mariadb-backup simply executes the following BACKUP STAGE statement to lock the database:

    When the backup is complete, it executes the following BACKUP STAGE statement to unlock the database:
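As a sketch based on the stages described in the sections that follow (START and BLOCK_COMMIT to lock, END to unlock), the sequence corresponds to statements like:

```sql
-- Lock the database for the final copy phase
BACKUP STAGE START;
BACKUP STAGE BLOCK_COMMIT;

-- ... copy the remaining files ...

-- Release the backup lock
BACKUP STAGE END;
```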

    If you would like to use a version of mariadb-backup that uses the statements in the most efficient way, your best option is to use MariaDB Backup that is bundled with MariaDB Enterprise Server.

    Tasks Performed Prior to BACKUP STAGE in MariaDB Community Server

    • Copy some transactional tables.

      • InnoDB (i.e. ibdataN and file extensions .ibd and .isl)

    • Copy the tail of some transaction logs.

    BACKUP STAGE START in MariaDB Community Server

    mariadb-backup from MariaDB Community Server does not currently perform any tasks in the START stage.

    BACKUP STAGE FLUSH in MariaDB Community Server

    mariadb-backup from MariaDB Community Server does not currently perform any tasks in the FLUSH stage.

    BACKUP STAGE BLOCK_DDL in MariaDB Community Server

    mariadb-backup from MariaDB Community Server does not currently perform any tasks in the BLOCK_DDL stage.

    BACKUP STAGE BLOCK_COMMIT in MariaDB Community Server

    mariadb-backup from MariaDB Community Server performs the following tasks in the BLOCK_COMMIT stage:

    • Copy other files.

      • i.e. file extensions .frm, .isl, .TRG, .TRN, .opt, .par

    BACKUP STAGE END in MariaDB Community Server

    mariadb-backup from MariaDB Community Server performs the following tasks in the END stage:

    • Copy the MyRocks checkpoint into the backup.

    mariadb-backup and BACKUP STAGE Commands in MariaDB Enterprise Server

    The following sections describe how the MariaDB Backup version of mariadb-backup that is bundled with MariaDB Enterprise Server uses each command in an efficient way.

    BACKUP STAGE START in MariaDB Enterprise Server

    mariadb-backup from MariaDB Enterprise Server performs the following tasks in the START stage:

    • Copy all transactional tables.

      • InnoDB (i.e. ibdataN and file extensions .ibd and .isl)

      • Aria (i.e. aria_log_control and file extensions .MAD and .MAI)

    BACKUP STAGE FLUSH in MariaDB Enterprise Server

    mariadb-backup from MariaDB Enterprise Server performs the following tasks in the FLUSH stage:

    • Copy all non-transactional tables that are not in use. This list of used tables is found with SHOW OPEN TABLES.

      • MyISAM (i.e. file extensions .MYD and .MYI)

      • MERGE

    BACKUP STAGE BLOCK_DDL in MariaDB Enterprise Server

    mariadb-backup from MariaDB Enterprise Server performs the following tasks in the BLOCK_DDL stage:

    • Copy other files.

      • i.e. file extensions .frm, .isl, .TRG, .TRN, .opt, .par

    BACKUP STAGE BLOCK_COMMIT in MariaDB Enterprise Server

    mariadb-backup from MariaDB Enterprise Server performs the following tasks in the BLOCK_COMMIT stage:

    • Create a MyRocks checkpoint using the rocksdb_create_checkpoint system variable.

    • Copy changes to system log tables.

      • mysql.general_log

      • mysql.slow_log

    BACKUP STAGE END in MariaDB Enterprise Server

    mariadb-backup from MariaDB Enterprise Server performs the following tasks in the END stage:

    • Copy the MyRocks checkpoint into the backup.

    This page is licensed: CC BY-SA / Gnu FDL

    Storage Engines Overview

    An introduction to MariaDB's pluggable storage engine architecture, highlighting key engines like InnoDB, MyISAM, and Aria for different workloads.

    Overview

    MariaDB features pluggable storage engines to allow per-table workload optimization.

    A storage engine is a type of plugin for MariaDB:

    • Different storage engines may be optimized for different workloads, such as transactional workloads, analytical workloads, or high throughput workloads.

    • Different storage engines may be designed for different use cases, such as federated table access, table sharding, and table archiving in the cloud.

    • Different tables on the same server may use different storage engines.

    Engine
    Target
    Optimization
    Availability

    Examples

    Identify the Default Storage Engine

    Identify the server's global default storage engine by querying the default_storage_engine system variable:

    Identify the session's default storage engine by querying the same variable at the session level:
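For example, assuming the variable is default_storage_engine, the global and session values can be inspected with:

```sql
SHOW GLOBAL VARIABLES LIKE 'default_storage_engine';
SHOW SESSION VARIABLES LIKE 'default_storage_engine';
```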

    Set the Default Storage Engine

    Global default storage engine:

    Session default storage engine supersedes global default during this session:
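Sketched as statements:

```sql
-- Server-wide default for new connections
SET GLOBAL default_storage_engine = InnoDB;

-- Overrides the global default for this session only
SET SESSION default_storage_engine = Aria;
```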

    Configure the Default Storage Engine
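The default can also be set persistently in an option file; a minimal sketch:

```ini
[mariadb]
default_storage_engine = InnoDB
```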

    Identify Available Storage Engines
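The available engines and their support status can be listed with:

```sql
SHOW ENGINES;
```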

    Choose Storage Engine for a New Table

    The storage engine is specified at the time of table creation using an ENGINE = clause.
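For example, creating a table with an explicit engine (table and column names here are illustrative):

```sql
CREATE TABLE demo_aria (
  id INT PRIMARY KEY
) ENGINE = Aria;
```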

    Resources

    Engines for System Tables

    Standard MariaDB storage engines are used for System Table storage:

    FAQ

    Can I use more than one storage engine on a server?

    • Yes, different tables can use different storage engines on the same server.

    • To create a table with a specific storage engine, specify the ENGINE table option to the statement.

    Can I use more than one storage engine in a single query?

    • Yes, a single query can reference tables that use multiple storage engines.

    • In some cases, special configuration may be required. For example, ColumnStore requires cross engine joins to be configured.

    What storage engine should I use for transactional or OLTP workloads?

    • InnoDB is the recommended storage engine for transactional or OLTP workloads.

    What storage engine should I use for analytical or OLAP workloads?

    • ColumnStore is the recommended storage engine for analytical or OLAP workloads.

    What storage engine should I use if my application performs both transactional and analytical queries?

    An application that performs both transactional and analytical queries implements what is known as hybrid transactional/analytical processing (HTAP).

    HTAP can be implemented with MariaDB by using InnoDB for transactional queries and ColumnStore for analytical queries.

    Reference

    MariaDB Server Reference

    • .

    • , which shows available storage engines.

    • , which shows storage engine by table.

    This page is: Copyright © 2025 MariaDB. All rights reserved.

    BLACKHOLE

    The BLACKHOLE storage engine discards all data written to it but records operations in the binary log, useful for replication filtering and testing.

    The BLACKHOLE storage engine accepts data but does not store it and always returns an empty result.

    A table using the BLACKHOLE storage engine consists of a single .frm table format file, but no associated data or index files.

    This storage engine can be useful, for example, if you want to run complex filtering rules on a slave without incurring any overhead on a master. The master can run a BLACKHOLE storage engine, with the data replicated to the slave for processing.

    Installing the Plugin

    Although the plugin's shared library is distributed with MariaDB by default, the plugin is not actually installed by MariaDB by default. There are two methods that can be used to install the plugin with MariaDB.

    The first method can be used to install the plugin without restarting the server. You can install the plugin dynamically by executing INSTALL SONAME or INSTALL PLUGIN:
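For example, using INSTALL SONAME (the BLACKHOLE shared library is named ha_blackhole):

```sql
INSTALL SONAME 'ha_blackhole';
```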

    The second method can be used to tell the server to load the plugin when it starts up. The plugin can be installed this way by providing the --plugin-load or the --plugin-load-add options. This can be specified as a command-line argument or in a relevant server option group in an option file:
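A minimal option-file sketch:

```ini
[mariadb]
plugin_load_add = ha_blackhole
```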

    Uninstalling the Plugin

    You can uninstall the plugin dynamically by executing UNINSTALL SONAME or UNINSTALL PLUGIN:
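For example:

```sql
UNINSTALL SONAME 'ha_blackhole';
```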

    If you installed the plugin by providing the --plugin-load or the --plugin-load-add options in a relevant server option group in an option file, then those options should be removed to prevent the plugin from being loaded the next time the server is restarted.

    Using the BLACKHOLE Storage Engine

    Using with DML

    INSERT, UPDATE, and DELETE statements all work with the BLACKHOLE storage engine. However, no data changes are actually applied.

    Using with Replication

    If the binary log is enabled, all SQL statements are logged as usual, and replicated to any slave servers. However, since rows are not stored, it is important to use the statement-based binary log format rather than the row or mixed format, as UPDATE and DELETE statements are neither logged nor replicated under row-based logging. See .

    Using with Triggers

    Some triggers work with the BLACKHOLE storage engine.

    BEFORE triggers for INSERT statements are still activated.

    Triggers for UPDATE and DELETE statements are not activated.

    Triggers with the FOR EACH ROW clause do not apply, since the tables have no rows.

    Using with Foreign Keys

    Foreign keys are not supported. If you convert an InnoDB table to BLACKHOLE, then the foreign keys will disappear. If you convert the same table back to InnoDB, then you will have to recreate them.

    Using with Virtual Columns

    If you convert an InnoDB table which contains virtual columns to BLACKHOLE, then it produces an error.

    Using with AUTO_INCREMENT

    Because a BLACKHOLE table does not store data, it will not maintain the AUTO_INCREMENT value. If you are replicating to a table that can handle AUTO_INCREMENT columns, and are not explicitly setting the primary key auto-increment value in the query, or using the SET INSERT_ID statement, inserts will fail on the slave due to duplicate keys.

    Limits

    The maximum key size is:

    • 3500 bytes (>= , , , and )

    • 1000 bytes (<= , , , and ).

    Examples
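A minimal illustration (names are arbitrary): inserts are accepted, but reads return nothing.

```sql
CREATE TABLE bh_example (
  id INT,
  name VARCHAR(40)
) ENGINE = BLACKHOLE;

INSERT INTO bh_example VALUES (1, 'test');

SELECT * FROM bh_example;
-- Empty set
```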

    This page is licensed: CC BY-SA / Gnu FDL

    CREATE TABLE t1 ( a INT );
    CREATE TABLE t2 ( b INT );
    CREATE TABLE student_tests (
     name CHAR(10), test CHAR(10), 
     score TINYINT, test_date DATE
    );
    INSERT INTO t1 VALUES (1), (2), (3);
    INSERT INTO t2 VALUES (2), (4);
    
    INSERT INTO student_tests 
     (name, test, score, test_date) VALUES
     ('Chun', 'SQL', 75, '2012-11-05'), 
     ('Chun', 'Tuning', 73, '2013-06-14'),
     ('Esben', 'SQL', 43, '2014-02-11'), 
     ('Esben', 'Tuning', 31, '2014-02-09'), 
     ('Kaolin', 'SQL', 56, '2014-01-01'),
     ('Kaolin', 'Tuning', 88, '2013-12-29'), 
     ('Tatiana', 'SQL', 87, '2012-04-28'), 
     ('Tatiana', 'Tuning', 83, '2013-09-30');
    CREATE TABLE student_details (
     id INT NOT NULL AUTO_INCREMENT, name CHAR(10), 
     date_of_birth DATE, PRIMARY KEY (id)
    );
    INSERT INTO student_details (name,date_of_birth) VALUES 
     ('Chun', '1993-12-31'), 
     ('Esben','1946-01-01'),
     ('Kaolin','1996-07-16'),
     ('Tatiana', '1988-04-13');
    
    SELECT * FROM student_details;
    +----+---------+---------------+
    | id | name    | date_of_birth |
    +----+---------+---------------+
    |  1 | Chun    | 1993-12-31    |
    |  2 | Esben   | 1946-01-01    |
    |  3 | Kaolin  | 1996-07-16    |
    |  4 | Tatiana | 1988-04-13    |
    +----+---------+---------------+
    SELECT * FROM t1 INNER JOIN t2 ON t1.a = t2.b;
    SELECT MAX(a) FROM t1;
    +--------+
    | MAX(a) |
    +--------+
    |      3 |
    +--------+
    SELECT MIN(a) FROM t1;
    +--------+
    | MIN(a) |
    +--------+
    |      1 |
    +--------+
    SELECT AVG(a) FROM t1;
    +--------+
    | AVG(a) |
    +--------+
    | 2.0000 |
    +--------+
    SELECT name, MAX(score) FROM student_tests GROUP BY name;
    +---------+------------+
    | name    | MAX(score) |
    +---------+------------+
    | Chun    |         75 |
    | Esben   |         43 |
    | Kaolin  |         88 |
    | Tatiana |         87 |
    +---------+------------+
    SELECT name, test, score FROM student_tests ORDER BY score DESC;
    +---------+--------+-------+
    | name    | test   | score |
    +---------+--------+-------+
    | Kaolin  | Tuning |    88 |
    | Tatiana | SQL    |    87 |
    | Tatiana | Tuning |    83 |
    | Chun    | SQL    |    75 |
    | Chun    | Tuning |    73 |
    | Kaolin  | SQL    |    56 |
    | Esben   | SQL    |    43 |
    | Esben   | Tuning |    31 |
    +---------+--------+-------+
    SELECT name, test, score FROM student_tests WHERE score=(SELECT MIN(score) FROM student_tests);
    +-------+--------+-------+
    | name  | test   | score |
    +-------+--------+-------+
    | Esben | Tuning |    31 |
    +-------+--------+-------+
    SELECT name, test, score FROM student_tests st1 WHERE score = (
      SELECT MAX(score) FROM student_tests st2 WHERE st1.name = st2.name
    );
    +---------+--------+-------+
    | name    | test   | score |
    +---------+--------+-------+
    | Chun    | SQL    |    75 |
    | Esben   | SQL    |    43 |
    | Kaolin  | Tuning |    88 |
    | Tatiana | SQL    |    87 |
    +---------+--------+-------+
    SELECT CURDATE() AS today;
    +------------+
    | today      |
    +------------+
    | 2014-02-17 |
    +------------+
    
    SELECT name, date_of_birth, TIMESTAMPDIFF(YEAR,date_of_birth,'2014-08-02') AS age 
      FROM student_details;
    +---------+---------------+------+
    | name    | date_of_birth | age  |
    +---------+---------------+------+
    | Chun    | 1993-12-31    |   20 |
    | Esben   | 1946-01-01    |   68 |
    | Kaolin  | 1996-07-16    |   18 |
    | Tatiana | 1988-04-13    |   26 |
    +---------+---------------+------+
    SELECT @avg_score:= AVG(score) FROM student_tests;
    +-------------------------+
    | @avg_score:= AVG(score) |
    +-------------------------+
    |            67.000000000 |
    +-------------------------+
    
    SELECT * FROM student_tests WHERE score > @avg_score;
    +---------+--------+-------+------------+
    | name    | test   | score | test_date  |
    +---------+--------+-------+------------+
    | Chun    | SQL    |    75 | 2012-11-05 |
    | Chun    | Tuning |    73 | 2013-06-14 |
    | Kaolin  | Tuning |    88 | 2013-12-29 |
    | Tatiana | SQL    |    87 | 2012-04-28 |
    | Tatiana | Tuning |    83 | 2013-09-30 |
    +---------+--------+-------+------------+
    SET @count = 0;
    
    SELECT @count := @count + 1 AS counter, name, date_of_birth FROM student_details;
    +---------+---------+---------------+
    | counter | name    | date_of_birth |
    +---------+---------+---------------+
    |       1 | Chun    | 1993-12-31    |
    |       2 | Esben   | 1946-01-01    |
    |       3 | Kaolin  | 1996-07-16    |
    |       4 | Tatiana | 1988-04-13    |
    +---------+---------+---------------+
    SELECT table_schema AS `DB`, table_name AS `TABLE`, 
      ROUND(((data_length + index_length) / 1024 / 1024), 2) `Size (MB)` 
      FROM information_schema.TABLES 
      ORDER BY (data_length + index_length) DESC;
    
    +--------------------+---------------------------------------+-----------+
    | DB                 | Table                                 | Size (MB) |
    +--------------------+---------------------------------------+-----------+
    | wordpress          | wp_simple_history_contexts            |      7.05 |
    | wordpress          | wp_posts                              |      6.59 |
    | wordpress          | wp_simple_history                     |      3.05 |
    | wordpress          | wp_comments                           |      2.73 |
    | wordpress          | wp_commentmeta                        |      2.47 |
    | wordpress          | wp_simple_login_log                   |      2.03 |
    ...
    CREATE TABLE t (id INT, f1 VARCHAR(2));
    
    INSERT INTO t VALUES (1,'a'), (2,'a'), (3,'b'), (4,'a');
    
    SELECT * FROM t t1, t t2 WHERE t1.f1=t2.f1 AND t1.id<>t2.id AND t1.id=(
      SELECT MAX(id) FROM t tab WHERE tab.f1=t1.f1
    );
    +------+------+------+------+
    | id   | f1   | id   | f1   |
    +------+------+------+------+
    |    4 | a    |    1 | a    |
    |    4 | a    |    2 | a    |
    +------+------+------+------+
    
    DELETE FROM t WHERE id IN (
      SELECT t2.id FROM t t1, t t2 WHERE t1.f1=t2.f1 AND t1.id<>t2.id AND t1.id=(
        SELECT MAX(id) FROM t tab WHERE tab.f1=t1.f1
      )
    );
    Query OK, 2 rows affected (0.120 sec)
    
    SELECT * FROM t;
    +------+------+
    | id   | f1   |
    +------+------+
    |    3 | b    |
    |    4 | a    |
    +------+------+
    LOAD DATA INFILE '/tmp/prospects.txt'
    INTO TABLE prospect_contact
    FIELDS TERMINATED BY '|';
    LOAD DATA INFILE '/tmp/prospects.txt'
    INTO TABLE prospect_contact
    FIELDS TERMINATED BY '|'
    LINES STARTING BY '"'
    TERMINATED BY '"\r\n';
    ...
    LINES STARTING BY '\'' 
    ...
    LOAD DATA INFILE '/tmp/prospects.txt'
    REPLACE INTO TABLE prospect_contact
    FIELDS TERMINATED BY '|'
    LINES STARTING BY '"'
    TERMINATED BY '"\n';
    LOAD DATA LOW_PRIORITY INFILE '/tmp/prospects.txt'
    ...
    ...
    TERMINATED BY 0x0d0a;
    ...
    IGNORE 1 LINES;
    LOAD DATA LOW_PRIORITY INFILE '/tmp/prospects.txt'
    REPLACE INTO TABLE prospect_contact
    FIELDS TERMINATED BY '"'
    ENCLOSED BY '"' ESCAPED BY '#'
    LINES STARTING BY '"'
    TERMINATED BY '"\n'
    IGNORE 1 LINES;
    LOAD DATA LOW_PRIORITY INFILE '/tmp/prospects.txt'
    REPLACE INTO TABLE sales_dept.prospect_contact
    FIELDS TERMINATED BY 0x09
    ENCLOSED BY '"' ESCAPED BY '#'
    TERMINATED BY 0x0d0a
    IGNORE 1 LINES
    (name_last, name_first, telephone);
    mariadb-import --user='marie_dyer' --password='angelle1207' \
    --fields-terminated-by=0x09 --lines-terminated-by=0x0d0a \
    --replace --low-priority --fields-enclosed-by='"' \
     --fields-escaped-by='#' --ignore-lines='1' --verbose \
    --columns='name_last, name_first, telephone' \
    sales_dept '/tmp/prospect_contact.txt'
    mariadb-dump --user='root' --password='geronimo' sales_dept prospect_contact > /tmp/prospects.sql
    mariadb --user='marie_dyer' --password='angelle12107' sales_dept < '/tmp/prospects.sql'
    mariadb-dump -u marie_dyer -p --no-create-info sales_dept prospect_contact > /tmp/prospects.sql
    CREATE AGGREGATE FUNCTION function_name (parameters) RETURNS return_type
    BEGIN
          All types of declarations
          DECLARE CONTINUE HANDLER FOR NOT FOUND RETURN return_val;
          LOOP
               FETCH GROUP NEXT ROW; -- fetches next row from table
               Other instructions
          END LOOP;
    END
    SET sql_mode=Oracle;
    DELIMITER //
    
    CREATE AGGREGATE FUNCTION function_name (parameters) RETURN return_type
       declarations
    BEGIN
       LOOP
          FETCH GROUP NEXT ROW; -- fetches next row from table
          -- other instructions
    
       END LOOP;
    EXCEPTION
       WHEN NO_DATA_FOUND THEN
          RETURN return_val;
    END //
    
    DELIMITER ;
    CREATE TABLE marks(stud_id INT, grade_count INT);
    
    INSERT INTO marks VALUES (1,6), (2,4), (3,7), (4,5), (5,8);
    
    SELECT * FROM marks;
    +---------+-------------+
    | stud_id | grade_count |
    +---------+-------------+
    |       1 |           6 |
    |       2 |           4 |
    |       3 |           7 |
    |       4 |           5 |
    |       5 |           8 |
    +---------+-------------+
    
    DELIMITER //
    CREATE AGGREGATE FUNCTION IF NOT EXISTS aggregate_count(x INT) RETURNS INT
    BEGIN
     DECLARE count_students INT DEFAULT 0;
     DECLARE CONTINUE HANDLER FOR NOT FOUND
     RETURN count_students;
          LOOP
              FETCH GROUP NEXT ROW;
              IF x  THEN
                SET count_students = count_students+1;
              END IF;
          END LOOP;
    END //
    DELIMITER ;
    DELIMITER //
    CREATE AGGREGATE FUNCTION medi_int(x INT) RETURNS DOUBLE
    BEGIN
      DECLARE CONTINUE HANDLER FOR NOT FOUND
        BEGIN
          DECLARE res DOUBLE;
          DECLARE cnt INT DEFAULT (SELECT COUNT(*) FROM tt);
          DECLARE lim INT DEFAULT (cnt-1) DIV 2;
          IF cnt % 2 = 0 THEN
            SET res = (SELECT AVG(a) FROM (SELECT a FROM tt ORDER BY a LIMIT lim,2) ttt);
          ELSE
            SET res = (SELECT a FROM tt ORDER BY a LIMIT lim,1);
          END IF;
          DROP TEMPORARY TABLE tt;
          RETURN res;
        END;
      CREATE TEMPORARY TABLE tt (a INT);
      LOOP
        FETCH GROUP NEXT ROW;
        INSERT INTO tt VALUES (x);
      END LOOP;
    END //
    DELIMITER ;
    SET sql_mode=Oracle;
    DELIMITER //
    
    CREATE AGGREGATE FUNCTION aggregate_count(x INT) RETURN INT AS count_students INT DEFAULT 0;
    BEGIN
       LOOP
          FETCH GROUP NEXT ROW;
          IF x  THEN
            SET count_students := count_students+1;
          END IF;
       END LOOP;
    EXCEPTION
       WHEN NO_DATA_FOUND THEN
          RETURN count_students;
    END aggregate_count //
    DELIMITER ;
    
    SELECT aggregate_count(stud_id) FROM marks;
    CREATE TABLE boys (
      linenum INT(6) NOT NULL DEFAULT 0 special=ROWID,
      name CHAR(12) NOT NULL,
      city CHAR(12) NOT NULL,
      birth DATE NOT NULL date_format='DD/MM/YYYY',
      hired DATE NOT NULL date_format='DD/MM/YYYY' flag=36,
      agehired INT(3) AS (floor(datediff(hired,birth)/365.25))
      virtual,
      fn CHAR(100) NOT NULL DEFAULT '' special=FILEID)
    ENGINE=CONNECT table_type=FIX file_name='boys.txt' mapped=YES lrecl=47;
    SELECT * FROM boys WHERE city = 'boston';
    CREATE DATABASE bookstore;
    
    USE bookstore;
    CREATE TABLE books (
    isbn CHAR(20) PRIMARY KEY, 
    title VARCHAR(50),
    author_id INT,
    publisher_id INT,
    year_pub CHAR(4),
    description TEXT );
    DESCRIBE books;
    +--------------+-------------+------+-----+---------+-------+
    | Field        | Type        | Null | Key | Default | Extra |
    +--------------+-------------+------+-----+---------+-------+
    | isbn         | char(20)    | NO   | PRI | NULL    |       |
    | title        | varchar(50) | YES  |     | NULL    |       |
    | author_id    | int(11)     | YES  |     | NULL    |       |
    | publisher_id | int(11)     | YES  |     | NULL    |       |
    | year_pub     | char(4)     | YES  |     | NULL    |       |
    | description  | text        | YES  |     | NULL    |       |
    +--------------+-------------+------+-----+---------+-------+
    CREATE TABLE authors
    (author_id INT AUTO_INCREMENT PRIMARY KEY,
    name_last VARCHAR(50),
    name_first VARCHAR(50),
    country VARCHAR(50) );
    INSERT INTO authors
    (name_last, name_first, country)
    VALUES('Kafka', 'Franz', 'Czech Republic');
    INSERT INTO books
    (title, author_id, isbn, year_pub)
    VALUES('The Castle', '1', '0805211063', '1998');
    INSERT INTO books
    (title, author_id, isbn, year_pub)
    VALUES('The Trial', '1', '0805210407', '1995'),
    ('The Metamorphosis', '1', '0553213695', '1995'),
    ('America', '1', '0805210644', '1995');
    SELECT title 
    FROM books;
    SELECT title 
    FROM books
    LIMIT 5;
    SELECT title, name_last 
    FROM books 
    JOIN authors USING (author_id);
    SELECT title AS 'Kafka Books'
    FROM books 
    JOIN authors USING (author_id)
    WHERE name_last = 'Kafka';
    
    +-------------------+
    | Kafka Books       |
    +-------------------+
    | The Castle        |
    | The Trial         |
    | The Metamorphosis |
    | America           |
    +-------------------+
    UPDATE books
    SET title = 'Amerika'
    WHERE isbn = '0805210644';
    DELETE FROM books
    WHERE author_id = '2034';
    $ mariadb-backup --backup \
       --target-dir=/var/mariadb/backup/ \
       --user=mariadb-backup --password=mypassword
    $ mariadb-backup --backup \
       --slave-info --safe-slave-backup \
       --target-dir=/var/mariadb/backup/ \
       --user=mariadb-backup --password=mypassword
    $ mariadb-backup --prepare \
       --target-dir=/var/mariadb/backup/
    $ rsync -avP /var/mariadb/backup dbserver2:/var/mariadb/backup
    $ mariadb-backup --copy-back \
       --target-dir=/var/mariadb/backup/
    $ chown -R mysql:mysql /var/lib/mysql/
    CREATE USER 'repl'@'dbserver2' IDENTIFIED BY 'password';
    GRANT REPLICATION SLAVE ON *.*  TO 'repl'@'dbserver2';
    mariadb-bin.000096 568 0-1-2
    $ cat xtrabackup_binlog_info
    mariadb-bin.000096 568 0-1-2
    SET GLOBAL gtid_slave_pos = "0-1-2";
    CHANGE MASTER TO 
       MASTER_HOST="dbserver1", 
       MASTER_PORT=3306, 
       MASTER_USER="repl",  
       MASTER_PASSWORD="password", 
       MASTER_USE_GTID=slave_pos;
    START SLAVE;
    CHANGE MASTER TO 
       MASTER_HOST="dbserver1", 
       MASTER_PORT=3306, 
       MASTER_USER="repl",  
       MASTER_PASSWORD="password", 
       MASTER_LOG_FILE='mariadb-bin.000096',
       MASTER_LOG_POS=568;
    START SLAVE;
    SHOW SLAVE STATUS\G
    FLUSH TABLES db_name.table_name FOR EXPORT
    
    # Copy the relevant files associated with the table
    
    UNLOCK TABLES;

    +---------+-------+--------+------------+------------+----------+------------------------------+
    | linenum | name  | city   | birth      | hired      | agehired | fn                           |
    +---------+-------+--------+------------+------------+----------+------------------------------+
    |         | Henry | Boston | 1987-06-07 | 2008-04-01 |       20 | d:\mariadb\sql\data\boys.txt |
    |       6 | Bill  | Boston | 1986-09-11 | 2008-02-10 |       21 | d:\mariadb\sql\data\boys.txt |
    +---------+-------+--------+------------+------------+----------+------------------------------+

    TABID (String): The name of the table this row belongs to. Useful for TBL tables.

    PARTID (String): The name of the partition this row belongs to. Specific to partitioned tables.

    SERVID (String): The name of the federated server or server host used by a MYSQL table. "ODBC" for an ODBC table, "JDBC" for a JDBC table, and "Current" for all other tables.

    mariadb-dump - Copying tables to other SQL servers. You can use the --tab option to create a CSV file of your table contents.

    • Archive

    • Aria: requires a clean shutdown. The table will automatically be fixed on the receiving server if aria_chk --zerofill was not run; if it was run, the table is immediately usable without any delay.

    • CSV

    • MyISAM

    • MERGE: .MRG files can be copied even while the server is running, as the file only contains a list of the tables that are part of the merge.


    The tail of the InnoDB redo log (i.e. ib_logfileN files) is copied for InnoDB tables.

    Copy some transactional tables.

    • Aria (i.e. aria_log_control and file extensions .MAD and .MAI)

  • Copy the non-transactional tables.

    • MyISAM (i.e. file extensions .MYD and .MYI)

    • MERGE (i.e. file extensions .MRG)

    • ARCHIVE (i.e. file extensions .ARM and .ARZ)

    • CSV (i.e. file extensions .CSM and .CSV)

  • Create a MyRocks checkpoint using the rocksdb_create_checkpoint system variable.

  • Copy the tail of some transaction logs.

    • The tail of the InnoDB redo log (i.e. ib_logfileN files) is copied for InnoDB tables.

  • Save the binary log position to xtrabackup_binlog_info.

  • Save the Galera Cluster state information to xtrabackup_galera_info.

  • Aria (i.e. aria_log_control and file extensions .MAD and .MAI)
  • Copy the tail of all transaction logs.

    • The tail of the InnoDB redo log (i.e. ib_logfileN files) is copied for InnoDB tables.

    • The tail of the Aria redo log (i.e. aria_log.N files) is copied for Aria tables.

  • MERGE (i.e. file extensions .MRG)
  • ARCHIVE (i.e. file extensions .ARM and .ARZ)

  • CSV (i.e. file extensions .CSM and .CSV)

  • Copy the tail of all transaction logs.

    • The tail of the InnoDB redo log (i.e. ib_logfileN files) is copied for InnoDB tables.

    • The tail of the Aria redo log (i.e. aria_log.N files) is copied for Aria tables.

  • Copy the non-transactional tables that were in use during BACKUP STAGE FLUSH.

    • MyISAM (i.e. file extensions .MYD and .MYI)

    • MERGE (i.e. file extensions .MRG)

    • ARCHIVE (i.e. file extensions .ARM and .ARZ)

    • CSV (i.e. file extensions .CSM and .CSV)

  • Check ddl.log for DDL executed before the BLOCK DDL stage.

    • The file names of newly created tables can be read from ddl.log.

    • The file names of dropped tables can also be read from ddl.log.

    • The file names of renamed tables can also be read from ddl.log, so the files can be renamed instead of re-copying them.

  • Copy changes to system log tables.

    • mysql.general_log

    • mysql.slow_log

    • This is easy as these are append only.

  • Copy the tail of all transaction logs.

    • The tail of the InnoDB redo log (i.e. ib_logfileN files) is copied for InnoDB tables.

    • The tail of the Aria redo log (i.e. aria_log.N files) is copied for Aria tables.


  • Copy changes to statistics tables.

    • mysql.table_stats

    • mysql.column_stats

    • mysql.index_stats

  • Copy the tail of all transaction logs.

    • The tail of the InnoDB redo log (i.e. ib_logfileN files) is copied for InnoDB tables.

    • The tail of the Aria redo log (i.e. aria_log.N files) is copied for Aria tables.

  • Save the binary log position to xtrabackup_binlog_info.

  • Save the Galera Cluster state information to xtrabackup_galera_info.

    CREATE TABLE new_table ... ENGINE=ARIA TRANSACTIONAL=0;
    ALTER TABLE new_table DISABLE KEYS;
    # Fill the table with data:
    INSERT INTO new_table SELECT * ...
    FLUSH TABLE new_table WITH READ LOCK;
    
    # Copy table data to some external location, like /tmp with something
    # like cp /my/data/test/new_table.* /tmp/
    
    UNLOCK TABLES;
    > ls -l /tmp/new_table.*
    -rw-rw---- 1 mysql my 42396148 Sep 21 17:58 /tmp/new_table.MAD
    -rw-rw---- 1 mysql my     8192 Sep 21 17:58 /tmp/new_table.MAI
    -rw-rw---- 1 mysql my     1039 Sep 21 17:58 /tmp/new_table.frm
    > aria_pack /tmp/new_table
    Compressing /tmp/new_table.MAD: (922666 records)
    - Calculating statistics
    - Compressing file
    46.07%
    > aria_chk -rq --ignore-control-file --sort_buffer_size=1G /tmp/new_table
    Recreating table '/tmp/new_table'
    - check record delete-chain
    - recovering (with sort) Aria-table '/tmp/new_table'
    Data records: 922666
    - Fixing index 1
    State updated
    > ls -l /tmp/new_table.*
    -rw-rw---- 1 mysql my 26271608 Sep 21 17:58 /tmp/new_table.MAD
    -rw-rw---- 1 mysql my 10207232 Sep 21 17:58 /tmp/new_table.MAI
    -rw-rw---- 1 mysql my     1039 Sep 21 17:58 /tmp/new_table.frm
    BACKUP STAGE START;
    BACKUP STAGE BLOCK_COMMIT;
    BACKUP STAGE END;

    +--------+-----------------+----------------------+--------------+
    | Engine | Use Case        | Workload             | Availability |
    +--------+-----------------+----------------------+--------------+
    |        | Cache, Temp     | Temporary Data       | ES 10.5+     |
    |        | Reads           | Reads                | ES 10.5+     |
    |        | Write-Heavy     | I/O Reduction, SSD   | ES 10.5+     |
    |        | Cloud           | Read-Only            | ES 10.5+     |
    |        | Federation      | Sharding, Interlink  | ES 10.5+     |
    | Aria   | Read-Heavy      | Reads                | ES 10.5+     |
    |        | Analytics, HTAP | Big Data, Analytical | ES 10.5+     |
    | InnoDB | General Purpose | Mixed Read/Write     |              |
    +--------+-----------------+----------------------+--------------+


    Aria Storage Engine

    An overview of Aria, a storage engine designed as a crash-safe alternative to MyISAM, featuring transactional capabilities and improved caching.

    The Aria storage engine is compiled in by default, and it is required to be 'in use' when MariaDB is started.

    All system tables are Aria.

    Additionally, internal on-disk tables are in the Aria table format instead of the MyISAM table format. This should speed up some GROUP BY and DISTINCT queries because Aria has better caching than MyISAM.

    Note: The Aria storage engine was previously called Maria (see The Aria Name for details on the rename) and in previous versions of MariaDB the engine was still called Maria.

    The following table options can be used with Aria tables in CREATE TABLE and ALTER TABLE:

    • TRANSACTIONAL= 0 | 1 : If the TRANSACTIONAL table option is set for an Aria table, then the table is crash-safe. This is implemented by logging any changes to the table to Aria's transaction log, and syncing those writes at the end of the statement. This will marginally slow down writes and updates. However, the benefit is that if the server dies before the statement ends, all non-durable changes will roll back to the state at the beginning of the statement. This also needs up to 6 bytes more for each row and key to store the transaction id (to allow concurrent inserts and selects).

      • TRANSACTIONAL=1 is not supported for partitioned tables.

      • An Aria table's default value for the TRANSACTIONAL table option depends on the table's value for the ROW_FORMAT table option. See below for more details.

      • Even if the TRANSACTIONAL table option is set for an Aria table, the table does not actually support transactions. In this context, transactional just means crash-safe.

    • PAGE_CHECKSUM= 0 | 1 : If index and data should use page checksums for extra safety.

    • TABLE_CHECKSUM= 0 | 1 : Same as CHECKSUM in MySQL 5.1

    • ROW_FORMAT=PAGE | FIXED | DYNAMIC : The table's row format.

      • The default value is PAGE.

      • To emulate MyISAM, set ROW_FORMAT=FIXED or ROW_FORMAT=DYNAMIC

    The TRANSACTIONAL and ROW_FORMAT table options interact as follows:

    • If TRANSACTIONAL=1 is set, then the only supported row format is PAGE. If ROW_FORMAT is set to some other value, then Aria issues a warning, but still forces the row format to be PAGE.

    • If TRANSACTIONAL=0 is set, then the table is not crash-safe, and any row format is supported.
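
The interaction above can be sketched with two hypothetical tables (names invented for illustration):

```sql
-- Crash-safe Aria table; with TRANSACTIONAL=1 the row format is forced to PAGE
CREATE TABLE aria_safe (id INT, v VARCHAR(30))
  ENGINE=Aria TRANSACTIONAL=1 ROW_FORMAT=PAGE;

-- Not crash-safe; any row format is allowed, e.g. the MyISAM-like DYNAMIC
CREATE TABLE aria_fast (id INT, v VARCHAR(30))
  ENGINE=Aria TRANSACTIONAL=0 ROW_FORMAT=DYNAMIC;
```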

    Some other improvements are:

    • CHECKSUM TABLE now ignores values in NULL fields. This makes CHECKSUM TABLE faster and fixes some cases where the same table definition could give different checksum values depending on the row format. The disadvantage is that the value now differs from other MySQL installations. The new checksum calculation applies to all table engines that use the default way of calculating the checksum, and to MyISAM, which does the calculation internally. Note: old MyISAM tables with internal checksums return the same checksum as before. To make them calculate according to the new rules you have to run an ALTER TABLE. You can get the old checksum behaviour by starting mariadbd/mysqld with the --old option, or by setting the system variable @@old to 1 when you do CHECKSUM TABLE ... EXTENDED;

    Startup Options for Aria

    For a full list, see the Aria system variables.

    In normal operations, the only variables you have to consider are:

      • aria_pagecache_buffer_size: This is where all index and data pages are cached. The bigger this is, the faster Aria will work.

      • aria_block_size: The default value, 8192, should be OK for most cases. The only problem with a higher value is that it takes longer to find a packed key in the block, as one has to search roughly 8192/2 bytes to find each key. We plan to fix this by adding a dictionary at the end of the page, to be able to do a binary search within the block before starting a scan. Until this is done, if key lookups take too long even when you are not hitting disk, you should consider making this smaller.

    Aria Log Files

    The aria_log_control file is a very short log file (52 bytes) that contains the current state of all Aria tables related to logging and checkpoints. In particular, it contains the following information:

    • The uuid is a unique identifier per system. All Aria files created will have a copy of this in their .MAI headers. This is mainly used to check if someone has copied an Aria file between MariaDB servers.

    • last_checkpoint_lsn and last_log_number are information about the current aria_log files.

    • trid is the highest transaction number seen so far. Used by recovery.

    The aria_log.* files contain the log of all operations that change Aria files (including CREATE TABLE, DROP TABLE, INSERT, etc.). This is a 'normal' WAL (Write Ahead Log), similar to the InnoDB log file, except that the aria_log files contain both redo and undo records. Old aria_log files are automatically deleted when they are no longer needed (when neither the last checkpoint nor any running transaction refers to their data).

    Missing valid id

    The error Missing valid id at start of file. File is not a valid aria control file means that something overwrote at least the first 4 bytes in the file. This can happen due to a problem with the file system (hardware or software), or a bug in which a thread inside MariaDB wrote to the wrong file descriptor (in which case you should report a bug, attaching a copy of the control file to assist).

    In the case of a corrupted log file, with the server shut down, one should be able to fix that by deleting all aria_log files. If the control file is corrupted, then one has to delete the aria_log_control file and all aria_log.* files. The effect of this is that on the next open of an Aria table, the server will think that the table has been moved from another system and do an automatic check and repair of it. If there were no issues, the table is opened and can be used as normal.

    See Also

    This page is licensed: CC BY-SA / Gnu FDL

    Indexes

    Understand the different types of indexes in MariaDB, such as Primary Keys and Unique Indexes, and how to use them to optimize query performance.

    For a basic overview, see the introductory article on indexes.

    There are four main kinds of indexes: primary keys (unique and not null), unique indexes (unique and can be null), plain indexes (not necessarily unique), and full-text indexes (for full-text searching).

    The terms 'KEY' and 'INDEX' are generally used interchangeably, and statements should work with either keyword.

    Primary Key

    A primary key is unique and can never be null. It will always identify only one record, and each record must be represented. Each table can only have one primary key.

    In InnoDB tables, all indexes contain the primary key as a suffix. Thus, when using this storage engine, keeping the primary key as small as possible is particularly important. If a primary key does not exist and there are no UNIQUE indexes, InnoDB creates a 6-byte clustered index which is invisible to the user.
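
As a sketch of why this matters, the following hypothetical InnoDB table keeps the clustered key small; the secondary index implicitly carries the primary key as a suffix:

```sql
CREATE TABLE orders (
  order_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  customer_ref CHAR(36) NOT NULL,
  total DECIMAL(10,2),
  PRIMARY KEY (order_id),            -- small clustered key
  INDEX idx_customer (customer_ref)  -- secondary index stores order_id as a suffix
) ENGINE=InnoDB;
```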

    Altering Tables in MariaDB

    Learn how to modify existing table structures using the ALTER TABLE statement, including adding, changing, and dropping columns and indexes.

    Despite a MariaDB developer's best planning, occasionally one needs to change the structure or other aspects of tables. This is not very difficult, but some developers are unfamiliar with the syntax for the functions used in MariaDB to accomplish this. And some changes can be very frustrating. In this article we'll explore the ways to alter tables in MariaDB and we'll give some precautions about related potential data problems.

    Before Beginning

    For the examples in this article, we will refer to a database called db1 containing a table called clients. The clients

    Files Created by mariadb-backup

    Reference of files generated during backup. This page explains the purpose of metadata files like xtrabackup_checkpoints created by the tool.

    mariadb-backup creates the following files:

    backup-my.cnf

    During the backup, any server options relevant to mariadb-backup are written to the backup-my.cnf option file, so that they can be re-read later during the --prepare

    Using CONNECT - Indexing

    The CONNECT storage engine has been deprecated.


    Indexing is one of the main ways to optimize queries. Key columns, in particular when they are used to join tables, should be indexed. But what should be done for columns that have only a few distinct values? If they are randomly placed in the table they should not be indexed, because reading many rows in random order can be slower than reading the entire table sequentially. However, if the values are sorted or clustered, indexing can be acceptable because indexes store the values in the order they appear in the table, and this will make retrieving them almost as fast as reading them sequentially.

    CONNECT provides four indexing types:

    SHOW GLOBAL VARIABLES LIKE 'default_storage_engine';
    +------------------------+--------+
    | Variable_name          | Value  |
    +------------------------+--------+
    | default_storage_engine | InnoDB |
    +------------------------+--------+
    SHOW SESSION VARIABLES LIKE 'default_storage_engine';
    +------------------------+--------+
    | Variable_name          | Value  |
    +------------------------+--------+
    | default_storage_engine | InnoDB |
    +------------------------+--------+
    SET GLOBAL default_storage_engine='MyRocks';
    SET SESSION default_storage_engine='MyRocks';
    [mariadb]
    ...
    default_storage_engine=MyRocks
    SHOW ENGINES;
    CREATE TABLE accounts.messages (
      id INT PRIMARY KEY AUTO_INCREMENT,
      sender_id INT,
      receiver_id INT,
      message TEXT
    ) ENGINE = MyRocks;
    INSTALL SONAME 'ha_blackhole';
    [mariadb]
    ...
    plugin_load_add = ha_blackhole
    UNINSTALL SONAME 'ha_blackhole';
    CREATE TABLE table_name (
       id INT UNSIGNED PRIMARY KEY NOT NULL,
       v VARCHAR(30)
    ) ENGINE=BLACKHOLE;
    
    INSERT INTO table_name VALUES (1, 'bob'),(2, 'jane');
    
    SELECT * FROM table_name;
    Empty set (0.001 sec)
    table is for keeping track of client names and addresses. To start off, we'll enter a DESCRIBE statement to see what the table looks like:

    This is a very simple table that will hold very little information. However, it's sufficient for the examples here in which we will change several of its columns. Before doing any structural changes to a table in MariaDB, especially if it contains data, one should make a backup of the table to be changed. There are a few ways to do this, but some choices may not be permitted by your web hosting company. Even if your database is on your own server, though, the mariadb-dump utility is typically the best tool for making and restoring backups in MariaDB, and it's generally permitted by web hosting companies. To backup the clients table with mariadb-dump, we will enter the following from the command-line:
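
The command described might look like the following sketch (the username and password are placeholders; db1 and clients are the database and table used in this article):

```shell
mariadb-dump --user='user_name' --password='password' \
--add-locks \
db1 clients > clients.sql
```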

    As you can see, the username and password are given on the first line. On the next line, the --add-locks option is used to lock the table before backing up and to automatically unlock it when the backup is finished. There are many other options in mariadb-dump that could be used, but for our purposes this one is all that's necessary. Incidentally, this command can be entered on one line from the shell (i.e., not from the mariadb client), or it can be entered on multiple lines as shown here by using the backslash (i.e., \) to let the shell know that more is to follow. On the third line above, the database name is given, followed by the table name. The redirect (i.e., >) tells the shell to send the results of the dump to a text file called clients.sql in the current directory. A directory path could be put in front of the file name to create the file elsewhere. If the table should need to be restored, the following can be run from the shell:
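
The restore step just described could be run like this (credentials again placeholders):

```shell
mariadb --user='user_name' --password='password' \
db1 < clients.sql
```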

    Notice that this line does not use the mariadb-dump utility. It uses the mariadb client from the outside, so to speak. When the dump file (clients.sql) is read into the database, it will delete the clients table and its data in MariaDB before restoring the backup copy with its data. So be sure that users haven't added data in the interim. In the examples in this article, we are assuming that there isn't any data in the tables yet.

    Basic Addition and More

    In order to add a column to an existing MariaDB table, one would use the ALTER TABLE statement. To demonstrate, suppose that it has been decided that there should be a column for the client's account status (i.e., active or inactive). To make this change, the following is entered:
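
That statement might look like this, matching the two-character width described below:

```sql
ALTER TABLE clients
ADD status CHAR(2);
```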

    This will add the column status to the end with a fixed width of two characters (i.e., AC for active and IA for inactive). In looking over the table again, it's decided that another field for client apartment numbers or the like needs to be added. That data could be stored in the address column, but it would be better for it to be in a separate column. An ALTER TABLE statement could be entered like above, but it will look tidier if the new column is located right after the address column. To do this, we'll use the AFTER option:
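
A sketch of that statement; the column name apt_number and its width are assumptions for illustration:

```sql
ALTER TABLE clients
ADD apt_number VARCHAR(10)
AFTER address;
```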

    By the way, to add a column to the first position, you would replace the last line of the SQL statement above to read like this:
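
For example, with a hypothetical apt_number column, the final clause becomes FIRST:

```sql
ALTER TABLE clients
ADD apt_number VARCHAR(10)
FIRST;
```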

    Before moving on, let's take a look at the table's structure so far:
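
That is:

```sql
DESCRIBE clients;
```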

    Changing One's Mind

    After looking over the above table display, it's decided that it might be better if the status column has the choices of 'AC' and 'IA' enumerated. To make this change, we'll enter the following SQL statement:
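
Such a statement could read, with the enumerated values per the description:

```sql
ALTER TABLE clients
CHANGE status status ENUM('AC','IA');
```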

    Notice that the column name status is specified twice. Although the column name isn't being changed, it still must be respecified. To change the column name (from status to active), while leaving the enumerated list the same, we specify the new column name in the second position:
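
That change could be written as:

```sql
ALTER TABLE clients
CHANGE status active ENUM('AC','IA');
```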

    Here we have the current column name and then the new column name, along with the data type specifications (i.e., ENUM), even though the result is only a name change. With the CHANGE clause everything must be stated, even items that are not to be changed.

    In checking the table structure again, more changes are decided on: The column address is to be renamed to address1 and changed to forty characters wide. Also, the enumeration of active is to have 'yes' and 'no' choices. The problem with changing enumerations is that data can be clobbered in the change if one isn't careful. We've glossed over this possibility before because we are assuming that clients is empty. Let's take a look at how the modifications suggested could be made with the table containing data:
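
One plausible version of that sequence (the exact data type of address1 is an assumption):

```sql
ALTER TABLE clients
CHANGE address address1 VARCHAR(40),
MODIFY active ENUM('yes','no','AC','IA');

UPDATE clients SET active = 'yes' WHERE active = 'AC';
UPDATE clients SET active = 'no'  WHERE active = 'IA';

ALTER TABLE clients
MODIFY active ENUM('yes','no');
```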

    The first SQL statement above changes address and modifies active in preparation for the transition. Notice the use of a MODIFY clause. It works the same as CHANGE, but it is only used for changing data types and not column names. Therefore, the column name isn't respecified. Notice also that there is a comma after the CHANGE clause. You can string several CHANGE and MODIFY clauses together with comma separators. We've enumerated both the new choices and the old ones to be able to migrate the data. The two UPDATE statements are designed to adjust the data accordingly and the last ALTER TABLE statement is to remove the old enumerated choices for the status column.

    In talking to the boss, we find out that the client_type column isn't going to be used. So we enter the following in MariaDB:

This deletes client_type and its data, but not the whole table, obviously. Nevertheless, it is a permanent and non-reversible action; there won't be a confirmation request when using the mariadb client. This is how it is with all MariaDB DROP statements and clauses. So be sure that you want to delete an element and its data before using a DROP. As mentioned earlier, be sure that you have a backup of your tables before making any structural changes.

    The Default

You may have noticed that the results of the DESCRIBE statements shown before have a heading called 'Default' and just about all of the fields have a default value of NULL. This means that there is no default value, that a null value is allowed, and that NULL will be used if a value isn't specified when a row is created. To specify a default value other than NULL, an ALTER TABLE statement can be entered with a SET DEFAULT clause. Suppose we're located in Louisiana and we want a default value of 'LA' for state since that's where our clients are usually located. We would enter the following to set the default:

    Notice that the second line starts with ALTER and not CHANGE. If we change our mind about having a default value for state, we would enter the following to reset it back to NULL (or whatever the initial default value would be based on the data type):

    This particular DROP doesn't delete data, by the way.

    Indexes

    One of the most irritating tasks in making changes to a table for newcomers is dealing with indexes. If they try to rename a column that is indexed by only using an ALTER TABLE statement like we used earlier, they will get a frustrating and confusing error message:

If they're typing this column change from memory, they will wear themselves out trying different variations, thinking that they remembered the syntax wrong. What most newcomers to MariaDB don't seem to realize is that the index is separate from the indexed column. To illustrate, let's take a look at the index for clients by using the SHOW INDEX statement:

The text above shows that behind the scenes there is an index associated with cust_id. The column cust_id is not the index itself. Incidentally, the \G at the end of the SHOW INDEX statement displays the results in portrait instead of landscape format. Before the name of an indexed column can be changed, the index related to it must be eliminated; the index is not automatically changed or deleted. Therefore, in the example above, MariaDB thinks that the developer is trying to create another primary key index. So, a DROP clause for the index must be entered first, and then a CHANGE for the column name can be made along with the establishment of a new index:

The order of these clauses is necessary: the index must be dropped before the column can be renamed. The syntax here is for a PRIMARY KEY; there are other types of indexes, of course. To change a column that has an index type other than a PRIMARY KEY, the index is dropped by name instead. Assuming for a moment that cust_id has a UNIQUE index, this is what we would enter to change its name:

Although the index type can be changed easily, MariaDB won't permit you to do so when there are duplicate values in the data and you are going from an index that allows duplicates (e.g., INDEX) to one that doesn't (e.g., UNIQUE). If you actually do want to eliminate the duplicates, though, you can add the IGNORE flag to force the duplicate rows to be deleted:

    In this example, we're not only changing the indexed column's name, but we're also changing the index type from INDEX to UNIQUE. And, again, the IGNORE flag tells MariaDB to ignore any records with duplicate values for cust_id.

    Renaming & Shifting Tables

    The previous sections covered how to make changes to columns in a table. Sometimes you may want to rename a table. To change the name of the clients table to client_addresses we enter this:

The RENAME TABLE statement also allows a table to be moved to another database just by adding the receiving database's name in front of the new table name, separated by a dot. Of course, you can move a table without renaming it. To move the newly named client_addresses table to the database db2, we enter this:

    Finally, with tables that contain data (excluding InnoDB tables), occasionally it's desirable to resort the data within the table. Although the ORDER BY clause in a SELECT statement can do this on the fly as needed, sometimes developers want to do this somewhat permanently to the data within the table based on a particular column or columns. It can be done by entering the following:

    Notice that we're sorting by the city first and then by the client's name. Now when the developer enters a SELECT statement without an ORDER BY clause, the results are already ordered by the default of city and then name, at least until more data is added to the table.

This is not applicable to InnoDB tables (the default storage engine), which are always ordered according to the clustered index, unless the primary key is defined on the specific columns.

    Summation

    Good planning is certainly important in developing a MariaDB database. However, as you can see, MariaDB is malleable enough that it can be reshaped without much trouble. Just be sure to make a backup before restructuring a table and be sure to check your work and the data when you're finished. With all of this in mind, you should feel comfortable in creating tables since they don't have to be perfect from the beginning.

    This page is licensed: CC BY-SA / Gnu FDL

  • Block Indexing

  • Remote Indexing

  • Dynamic Indexing

  • Standard Indexing

CONNECT standard indexes are created and used like those of other storage engines, although they have a specific internal format. The CONNECT handler supports the use of standard indexes for most of the file-based table types.

You can define them in the CREATE TABLE statement, or using either the CREATE INDEX statement or the ALTER TABLE statement. In all cases, the index files are created automatically. They can be dropped either using the DROP INDEX statement or the ALTER TABLE statement, and this erases the index files.
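
Assuming a CONNECT table named dept with columns name and location (names invented for illustration), the statements might look like this:

```sql
-- Create an index on an existing CONNECT table; the index file is made automatically
CREATE INDEX xname ON dept (name);

-- The same can be done through ALTER TABLE
ALTER TABLE dept ADD INDEX xloc (location);

-- Dropping an index also erases the corresponding index file
DROP INDEX xname ON dept;
```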

Indexes are automatically reconstructed when the table is created, modified by INSERT, UPDATE or DELETE commands, or when the SEPINDEX option is changed. If you have a lot of changes to make to a table at one time, you can use table locking to prevent the indexes from being reconstructed after each statement. The indexes are reconstructed when the table is unlocked. For instance:

    If a table was modified by an external application that does not handle indexing, the indexes must be reconstructed to prevent returning false or incomplete results. To do this, use the OPTIMIZE TABLE command.
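
For example, after an external program has appended rows to the data file of a hypothetical table dept:

```sql
-- Rebuild the CONNECT indexes so they match the externally modified data file
OPTIMIZE TABLE dept;
```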

    For outward tables, index files are not erased when dropping the table. This is the same as for the data file and preserves the possibility of several users using the same data file via different tables.

Unlike other storage engines, CONNECT constructs the indexes as files that are named by default from the data file name, not from the table name, and located in the data file directory. Depending on the SEPINDEX table option, indexes are saved in a unique file or in separate files (if SEPINDEX is true). For instance, if indexes are in separate files, the primary index of the table dept, whose data file dept.dat is of type DOS, is a file named dept_PRIMARY.dnx. This makes it possible to define several tables on the same data file, possibly with different options such as mapped or not mapped, and to share the index files as well.

    If the index file should have a different name, for instance because several tables are created on the same data file with different indexes, specify the base index file name with the XFILE_NAME option.
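
A sketch of such a declaration, with table, column, and file names invented for illustration:

```sql
-- A second table on the same data file, with its own base index file name
CREATE TABLE dept2 (
  number   CHAR(4)  NOT NULL,
  location CHAR(15) NOT NULL
) ENGINE=CONNECT TABLE_TYPE=DOS FILE_NAME='dept.dat' XFILE_NAME='dept2';
```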

    Note1: Indexed columns must be declared NOT NULL; CONNECT doesn't support indexes containing null values.

    Note 2: MRR is used by standard indexing if it is enabled.

    Note 3: Prefix indexing is not supported. If specified, the CONNECT engine ignores the prefix and builds a whole index.

    Handling index errors

The way CONNECT handles indexing is very specific. All table modifications are done regardless of indexing. Only after a table has been modified, or when an OPTIMIZE TABLE command is sent, are the indexes made. If an error occurs, the corresponding index is not made. However, CONNECT being a non-transactional engine, it is unable to roll back the changes made to the table. The main causes of indexing errors are:

• Trying to index a nullable column. In this case, you can alter the table to declare the column as not nullable or, if the column really is nullable, leave it unindexed.

• Entering duplicate values in a column indexed by a unique index. In this case, if the index was wrongly declared as unique, alter its declaration to reflect this. If the column should really contain unique values, you must manually remove or update the duplicate values.

    In both cases, after correcting the error, remake the indexes with the OPTIMIZE TABLE command.

    Index file mapping

To accelerate the indexing process, CONNECT makes an index structure in memory from the index file. This can be done by reading the index file, or by using it as if it were in memory through "file mapping". On enabled versions, file mapping is used according to the boolean connect_indx_map system variable. Set it to 0 (file read) or 1 (file mapping).

    Block Indexing

To accelerate input/output, CONNECT uses when possible a read/write mode by blocks of n rows, n being the value given in the BLOCK_SIZE option of the CREATE TABLE statement, or a default value depending on the table type. This is automatic for fixed files (FIX, BIN, DBF or VEC), but must be specified for variable files (DOS, CSV or FMT).

For blocked tables, further optimization can be achieved if the data values for some columns are "clustered", meaning that they are not evenly scattered in the table but grouped in some consecutive rows. Block indexing permits skipping blocks in which no rows fulfill a conditional predicate, without even having to read the block. This is true in particular for sorted columns.

You indicate this when creating the table by using the DISTRIB=d column option. The enum value d can be scattered, clustered, or sorted. In general, only one column can be sorted. Block indexing is used only for clustered and sorted columns.
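
For instance, for a hypothetical fixed-format table whose date column is sorted and whose region column is grouped in consecutive rows:

```sql
CREATE TABLE sales (
  sale_date CHAR(10)     NOT NULL DISTRIB=sorted,
  region    CHAR(8)      NOT NULL DISTRIB=clustered,
  amount    DOUBLE(12,2) NOT NULL
) ENGINE=CONNECT TABLE_TYPE=FIX FILE_NAME='sales.txt' BLOCK_SIZE=100;
```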

    Difference between standard indexing and block indexing

• Block indexing is internally handled by CONNECT while sequentially reading a table's data. This means in particular that when standard indexing is used on a table, block indexing is not used.

    • In a query, only one standard index can be used. However, block indexing can combine the restrictions coming from a where clause implying several clustered/sorted columns.

    • The block index files are faster to make and much smaller than standard index files.

    Notes for this Release:

• On all operations that create or modify a table, CONNECT automatically calculates or recalculates and saves the min/max or bitmap values for each block, enabling it to skip blocks containing no acceptable values. In the case where the optimize file no longer corresponds to the table, because it has been accidentally destroyed or because some column definitions have been altered, you can use the OPTIMIZE TABLE command to reconstruct the optimization file.

• Sorted column special processing is currently restricted to ascending sort. Columns sorted in descending order must be flagged as clustered. Improper sorting is not checked in UPDATE or INSERT operations but is flagged when optimizing the table.

• Block indexing can be done in two ways: keeping the min/max values existing for each block, or keeping a bitmap recording which distinct column values are met in each block. The second way often gives a better optimization, except for sorted columns, for which both are equivalent. The bitmap approach can be used only on columns that do not have too many distinct values. This is estimated by the MAX_DIST option value associated with the column when creating the table. Bitmap block indexing is used if this number is not greater than the MAXBMP setting for the database.

• CONNECT cannot perform block indexing on case-insensitive character columns. To force block indexing on a character column, specify its charset as not case insensitive, for instance as binary. However, this will also apply to all other clauses, the column now being case-sensitive.

    Remote Indexing

Remote indexing is specific to the MYSQL table type. It is equivalent to what the FEDERATED storage engine does. A MYSQL table does not support indexes per se. Because access to the table is handled remotely, it is the remote table that supports the indexes. What the MYSQL table does is just add a WHERE clause to the SELECT command sent to the remote server, allowing the remote server to use indexing when applicable. Note, however, that because CONNECT adds when possible all or part of the where clause of the original query, this happens often even if the remote indexed column is not declared locally indexed. The only, but very important, case where a column should be locally declared indexed is when it is used to join tables. Otherwise, the required where clause would not be added to the sent SELECT query.

    See Indexing of MYSQL tables for more.

    Dynamic Indexing

An index created as "dynamic" is a standard index which, in some cases, can be reconstructed for a specific query. This happens in particular for some queries where two tables are joined by an indexed key column. If the "from" table is big and the "to" table is reduced in size because of a where clause, it can be worthwhile to reconstruct the index on this reduced table.

Because of the time added by reconstructing the index, this is valuable only if the time gained by reducing the index size is more than this reconstruction time. This is why it should not be done if the "from" table is small, because there will not be enough rows to join to compensate for the additional time. Otherwise, the gains of using a dynamic index are:

    • Indexing time is a little faster if the index is smaller.

    • The join process will return only the rows fulfilling the where clause.

• Because the table is read sequentially when reconstructing the index, there is no need for MRR.

    • Constructing the index can be faster if the table is reduced by block indexing.

    • While constructing the index, CONNECT also stores in memory the values of other used columns.

    This last point is particularly important. It means that after the index is reconstructed, the join is done on a temporary memory table.

Unfortunately, storage engines being called independently by MariaDB for each table, CONNECT has no global information to decide when it is good to use dynamic indexing. This is why you should use it only in cases where you see that some important join queries take a very long time, and only on columns used for joining the table. An index is declared dynamic by using the Boolean DYNAM index option. For instance, the query:

    Such a query joining the diag table to the patients table may last a very long time if the tables are big. To declare the primary key on the pnb column of the patients table to be dynamic:

    Note 1: The comment is not mandatory here but useful to see that the index is dynamic if you use the SHOW INDEX command.

    Note 2: There is currently no way to just change the DYNAM option without dropping and adding the index. This is unfortunate because it takes time.

    Virtual Indexing

Virtual indexing applies only to virtual tables of type VIR and must be made on a column specifying SPECIAL=ROWID or SPECIAL=ROWNUM.
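
A minimal sketch, with invented names (the SIZE option setting the number of virtual rows is an assumption here):

```sql
-- A virtual table of 1,000,000 generated rows, indexed on its row number
CREATE TABLE virt (
  n INT NOT NULL SPECIAL=ROWID PRIMARY KEY
) ENGINE=CONNECT TABLE_TYPE=VIR SIZE=1000000;
```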

    This page is licensed: GPLv2

    DESCRIBE clients; 
    
    +-------------+-------------+------+-----+---------+-------+
    | Field       | Type        | Null | Key | Default | Extra |
    +-------------+-------------+------+-----+---------+-------+
    | cust_id     | int(11)     |      | PRI | 0       |       |
    | name        | varchar(25) | YES  |     | NULL    |       |
    | address     | varchar(25) | YES  |     | NULL    |       |
    | city        | varchar(25) | YES  |     | NULL    |       |
    | state       | char(2)     | YES  |     | NULL    |       |
    | zip         | varchar(10) | YES  |     | NULL    |       |
    | client_type | varchar(4)  | YES  |     | NULL    |       |
    +-------------+-------------+------+-----+---------+-------+
    mariadb-dump --user='username' --password='password' --add-locks db1 clients > clients.sql
    mariadb --user='username' --password='password' db1 < clients.sql
    ALTER TABLE clients 
    ADD COLUMN status CHAR(2);
    ALTER TABLE clients 
    ADD COLUMN address2 VARCHAR(25) 
    AFTER address;
    ...
    FIRST;
    DESCRIBE clients;
    
    +-------------+-------------+------+-----+---------+-------+
    | Field       | Type        | Null | Key | Default | Extra |
    +-------------+-------------+------+-----+---------+-------+
    | cust_id     | int(11)     |      | PRI | 0       |       |
    | name        | varchar(25) | YES  |     | NULL    |       |
    | address     | varchar(25) | YES  |     | NULL    |       |
    | address2    | varchar(25) | YES  |     | NULL    |       |
    | city        | varchar(25) | YES  |     | NULL    |       |
    | state       | char(2)     | YES  |     | NULL    |       |
    | zip         | varchar(10) | YES  |     | NULL    |       |
    | client_type | varchar(4)  | YES  |     | NULL    |       |
    | status      | char(2)     | YES  |     | NULL    |       |
    +-------------+-------------+------+-----+---------+-------+
    ALTER TABLE clients 
    CHANGE status status ENUM('AC','IA');
    ALTER TABLE clients
    CHANGE status active ENUM('AC','IA');
    ALTER TABLE clients
    CHANGE address address1 VARCHAR(40),
    MODIFY active ENUM('yes','NO','AC','IA');
    
    UPDATE clients
    SET active = 'yes'
    WHERE active = 'AC';
    
    UPDATE clients
    SET active = 'NO'
    WHERE active = 'IA';
    
    ALTER TABLE clients
    MODIFY active ENUM('yes','NO');
    ALTER TABLE clients
    DROP client_type;
    ALTER TABLE clients
    ALTER state SET DEFAULT 'LA';
    ALTER TABLE clients
    ALTER state DROP DEFAULT;
    ALTER TABLE clients
    CHANGE cust_id client_id INT
    PRIMARY KEY;
     
    ERROR 1068: Multiple primary key defined
    SHOW INDEX FROM clients\G
    
    *************************** 1. row ***************************
               TABLE: clients
          Non_unique: 0
            Key_name: PRIMARY
        Seq_in_index: 1
         Column_name: cust_id
           Collation: A
         Cardinality: 0
            Sub_part: NULL
              Packed: NULL
             Comment:
    1 row in set (0.00 sec)
    ALTER TABLE clients
    DROP PRIMARY KEY,
    CHANGE cust_id
    client_id INT PRIMARY KEY;
ALTER TABLE clients
DROP INDEX cust_id,
CHANGE cust_id
client_id INT UNIQUE;
ALTER IGNORE TABLE clients
DROP INDEX cust_id,
CHANGE cust_id
client_id INT UNIQUE;
    RENAME TABLE clients 
    TO client_addresses;
    RENAME TABLE client_addresses
    TO db2.client_addresses;
    ALTER TABLE client_addresses
    ORDER BY city, name;
    LOCK TABLE t1 WRITE;
    INSERT INTO t1 VALUES(...);
    INSERT INTO t1 VALUES(...);
    ...
    UNLOCK TABLES;
    SELECT d.diag, COUNT(*) cnt FROM diag d, patients p WHERE d.pnb =
    p.pnb AND ageyears < 17 AND county = 30 AND drg <> 11 AND d.diag
    BETWEEN 4296 AND 9434 GROUP BY d.diag ORDER BY cnt DESC;
    ALTER TABLE patients DROP PRIMARY KEY;
    ALTER TABLE patients ADD PRIMARY KEY (pnb) COMMENT 'DYNAMIC' dynam=1;
If TRANSACTIONAL is not set to any value, then any row format is supported. If ROW_FORMAT is set, then the table will use that row format. Otherwise, the table will use the default PAGE row format. In this case, if the table uses the PAGE row format, then it is crash-safe. If it uses some other row format, then it will not be crash-safe.

At startup, Aria will check the Aria logs and automatically recover the tables from the last checkpoint if the server was not taken down correctly. See Aria Log Files.

  • Possible values to try are 2048, 4096 or 8192

  • Note that you can't change this without dumping, deleting old tables and deleting all log files and then restoring your Aria tables. (This is the only option that requires a dump and load.)

  • aria-log-purge-type

    • Set this to "at_flush" if you want to keep a copy of the transaction logs (good as an extra backup). The logs will stay around until you execute FLUSH ENGINE LOGS.


Many tables use a numeric ID field as a primary key. The AUTO_INCREMENT attribute can be used to generate a unique identity for new rows, and is commonly used with primary keys.

    Primary keys are usually added when the table is created with the CREATE TABLE statement. For example, the following creates a primary key on the ID field. Note that the ID field had to be defined as NOT NULL, otherwise the index could not have been created.
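
For instance (table and column names are illustrative):

```sql
CREATE TABLE Employees (
  ID TINYINT(3) UNSIGNED NOT NULL AUTO_INCREMENT,
  First_Name VARCHAR(25) NOT NULL,
  Last_Name VARCHAR(25) NOT NULL,
  Position VARCHAR(25) NOT NULL,
  PRIMARY KEY(ID)
);
```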

    You cannot create a primary key with the CREATE INDEX command. If you do want to add one after the table has already been created, use ALTER TABLE, for example:
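
Assuming the same illustrative Employees table, created without a key:

```sql
ALTER TABLE Employees ADD PRIMARY KEY(ID);
```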

    Finding Tables Without Primary Keys

Tables in the INFORMATION_SCHEMA database can be queried to find tables that do not have primary keys. For example, here is a query using the TABLES and KEY_COLUMN_USAGE tables that can be used:
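
One possible form of such a query (a sketch; adjust the schema filter to taste):

```sql
SELECT t.TABLE_SCHEMA, t.TABLE_NAME
FROM information_schema.TABLES t
LEFT JOIN information_schema.KEY_COLUMN_USAGE k
       ON t.TABLE_SCHEMA = k.TABLE_SCHEMA
      AND t.TABLE_NAME   = k.TABLE_NAME
      AND k.CONSTRAINT_NAME = 'PRIMARY'
WHERE t.TABLE_TYPE = 'BASE TABLE'
  AND t.TABLE_SCHEMA NOT IN ('information_schema', 'performance_schema', 'mysql', 'sys')
  AND k.CONSTRAINT_NAME IS NULL;
```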

    Unique Index

A unique index must be unique, but it can have columns that may be NULL. So each key value identifies only one record, but not every record needs to be represented.


A unique index, if the index type is not specified, is normally a BTREE index that can also be used by the optimizer to find rows. If the key is longer than the maximum key length for the storage engine in use, and the storage engine supports long unique indexes, a HASH key is created. This enables MariaDB to enforce uniqueness for any type or number of columns.

    For example, to create a unique key on the Employee_Code field, as well as a primary key, use:
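
For example, with illustrative names:

```sql
CREATE TABLE Employees (
  ID TINYINT(3) UNSIGNED NOT NULL AUTO_INCREMENT,
  First_Name VARCHAR(25) NOT NULL,
  Last_Name VARCHAR(25) NOT NULL,
  Employee_Code VARCHAR(25) NOT NULL,
  Position VARCHAR(25) NOT NULL,
  PRIMARY KEY(ID),
  UNIQUE KEY(Employee_Code)
);
```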

    Unique keys can also be added after the table is created with the CREATE INDEX command, or with the ALTER TABLE command, for example:
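
With CREATE INDEX (index and column names invented for illustration):

```sql
CREATE UNIQUE INDEX HomePhone ON Employees(Home_Phone);
```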

    and
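
with ALTER TABLE (same illustrative names):

```sql
ALTER TABLE Employees ADD UNIQUE HomePhone (Home_Phone);
```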

    Indexes can contain more than one column. MariaDB is able to use one or more columns on the leftmost part of the index, if it cannot use the whole index. (except for the HASH index type).

    Take another example:

    Since the index is defined as unique over both columns a and b, the following row is valid, as while neither a nor b are unique on their own, the combination is unique:
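
A sketch of this, using an invented table t1 with a unique index over (a, b):

```sql
CREATE TABLE t1 (
  a INT NOT NULL,
  b CHAR(2) NOT NULL,
  UNIQUE (a, b)
);
INSERT INTO t1 VALUES (1,'a'), (2,'a'), (3,'b');
-- Valid: neither a=1 nor b='b' is unique on its own, but the pair (1,'b') is new
INSERT INTO t1 VALUES (1,'b');
```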

    The fact that a UNIQUE constraint can be NULL is often overlooked. In SQL any NULL is never equal to anything, not even to another NULL. Consequently, a UNIQUE constraint will not prevent one from storing duplicate rows if they contain null values:
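
For example (a hypothetical table with nullable columns under a unique index):

```sql
CREATE TABLE t2 (
  a INT,
  b CHAR(2),
  UNIQUE (a, b)
);
-- Both rows are accepted: the NULL in b never compares equal to another NULL
INSERT INTO t2 (a) VALUES (1), (1);
```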

Indeed, in SQL the two last rows, even if identical, are not equal to each other:
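
This can be checked directly: comparing NULL with = yields NULL (unknown), while the NULL-safe operator <=> yields true:

```sql
SELECT NULL = NULL, NULL <=> NULL;
-- returns NULL and 1
```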

    In MariaDB you can combine this with virtual columns to enforce uniqueness over a subset of rows in a table:
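
A sketch of the idea (names invented): a persistent virtual column mirrors the user name only for the rows that should be unique, and carries NULL otherwise.

```sql
CREATE TABLE users (
  user_name VARCHAR(80),
  status ENUM('active','on-hold','deleted'),
  active_name VARCHAR(80) AS (IF(status IN ('active','on-hold'), user_name, NULL))
              PERSISTENT UNIQUE
);
```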

    This table structure ensures that all active or on-hold users have distinct names, but as soon as a user is deleted, his name is no longer part of the uniqueness constraint, and another user may get the same name.

    If a unique index consists of a column where trailing pad characters are stripped or ignored, inserts into that column where values differ only by the number of trailing pad characters will result in a duplicate-key error.
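
For example, with the default PAD SPACE collation on a VARCHAR column (table name invented):

```sql
CREATE TABLE t3 (code VARCHAR(10), UNIQUE (code));
INSERT INTO t3 VALUES ('a');
-- Fails with a duplicate-key error: 'a ' differs only by a trailing space
INSERT INTO t3 VALUES ('a ');
```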


    For some engines, like InnoDB, UNIQUE can be used with any type of columns or any number of columns.

If the key length is longer than the maximum key length supported by the engine, a HASH key is created. This can be seen with SHOW CREATE TABLE table_name or SHOW INDEX FROM table_name:
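
For example (a sketch, assuming a server version with long unique support):

```sql
CREATE TABLE t4 (a TEXT, UNIQUE KEY (a)) ENGINE=InnoDB;
-- The key should be listed with USING HASH in the output
SHOW CREATE TABLE t4;
```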

    Plain Indexes

    Indexes do not necessarily need to be unique. For example:
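
A plain (non-unique) index on an illustrative table:

```sql
CREATE TABLE Employees2 (
  ID INT NOT NULL AUTO_INCREMENT,
  Last_Name VARCHAR(25) NOT NULL,
  PRIMARY KEY(ID),
  INDEX (Last_Name)
);
```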

    Full-Text Indexes

    Full-text indexes support full-text indexing and searching. See the Full-Text Indexes section.

    Choosing Indexes

    In general, you should only add indexes to match the queries your application uses. Any extra will waste resources. In an application with very small tables, indexes will not make much difference but as soon as your tables are larger than your buffer sizes the indexes will start to speed things up dramatically.

    Using the EXPLAIN statement on your queries can help you decide which columns need indexing.

If your query contains something like LIKE '%word%', without a full-text index you are doing a full table scan every time, which is very slow.

If your table has a large number of reads and writes, consider using delayed writes. This uses the database engine in a "batch" write mode, which cuts down on disk I/O, therefore increasing performance.

    Use the CREATE INDEX command to create an index.

    If you are building a large table then for best performance add the index after the table is populated with data. This is to increase the insert performance and remove the index overhead during inserts.

    Viewing Indexes

    You can view which indexes are present on a table, as well as details about them, with the SHOW INDEX statement.

If you want to know how to re-create an index, run SHOW CREATE TABLE.

    When to Remove an Index

If an index is rarely used (or not used at all), then remove it to increase INSERT and UPDATE performance.

    If user statistics are enabled, the Information Schema INDEX_STATISTICS table stores the index usage.

    If the slow query log is enabled and the log_queries_not_using_indexes server system variable is ON, the queries which do not use indexes are logged.

    The initial version of this article was copied, with permission, from Proper_Indexing_Strategy on 2012-10-30.

    See Also

    • AUTO_INCREMENT

    • The Essentials of an Index

This page is licensed: CC BY-SA / Gnu FDL


    ib_logfile0

    mariadb-backup creates an empty InnoDB redo log file called ib_logfile0 as part of the --prepare stage. This file has 3 roles:

    1. In the source server, ib_logfile0 is the first (and possibly the only) InnoDB redo log file.

    2. In the non-prepared backup, ib_logfile0 contains all of the InnoDB redo log copied during the backup.

    3. During the --prepare stage, ib_logfile0 is initialized as an empty InnoDB redo log file. That way, if the backup is manually restored, any pre-existing InnoDB redo log files get overwritten by the empty one. This helps to prevent certain kinds of known issues.

    mariadb_backup_binlog_info

    This file stores the binary log file name and position that corresponds to the backup.

This file also stores the value of the gtid_current_pos system variable that corresponds to the backup, like this:

    mariadb-bin.000096 568 0-1-2

    The values in this file are only guaranteed to be consistent with the backup if the --no-lock option was not provided when the backup was taken.


    mariadb_backup_binlog_pos_innodb

    This file is created by mariadb-backup to provide the binary log file name and position when the --no-lock option is used. It can be used instead of the xtrabackup_binlog_info file to obtain transactionally consistent binlog coordinates from the backup of a master server with the --no-lock option to minimize the impact on a running server.

    Whenever a transaction is committed inside InnoDB when the binary log is enabled, the corresponding binlog coordinates are written to the InnoDB redo log along with the transaction commit. This allows one to restore the binlog coordinates corresponding to the last commit done by InnoDB along with a backup.

    The limitation of using xtrabackup_binlog_pos_innodb with the --no-lock option is that no DDL or modification of non-transactional tables should be done during the backup. If the last event in the binlog is a DDL/non-transactional update, the coordinates in the file xtrabackup_binlog_pos_innodb are too old. But as long as only InnoDB updates are done during the backup, the coordinates are correct.


    mariadb_backup_checkpoints

The mariadb_backup_checkpoints file contains metadata about the backup.

    For example:
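
Illustrative contents for a prepared full backup (the LSN values are invented):

```
backup_type = log-applied
from_lsn = 0
to_lsn = 1635102
last_lsn = 1635102
```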

    See below for a description of the fields.

If the --extra-lsndir option is provided, then an extra copy of this file is saved in that directory.


    backup_type

    If the backup is a non-prepared full backup or a non-prepared partial backup, then backup_type is set to full-backuped.

    If the backup is a non-prepared incremental backup, then backup_type is set to incremental.

    If the backup has already been prepared, then backup_type is set to log-applied.

    from_lsn

    If backup_type is full-backuped, then from_lsn has the value of 0.

    If backup_type is incremental, then from_lsn has the value of the log sequence number (LSN) at which the backup started reading from the InnoDB redo log. This is internally used by mariadb-backup when preparing incremental backups.

    This value can be manually set during an incremental backup with the --incremental-lsn option. However, it is generally better to let mariadb-backup figure out the from_lsn automatically by specifying a parent backup with the --incremental-basedir option.

    to_lsn

    to_lsn has the value of the log sequence number (LSN) of the last checkpoint in the InnoDB redo log. This is internally used by mariadb-backup when preparing incremental backups.

    last_lsn

    last_lsn has the value of the last log sequence number (LSN) read from the InnoDB redo log. This is internally used by mariadb-backup when preparing incremental backups.

    mariadb_backup_info

    Contains information about the backup. The fields in this file are listed below.

    If the --extra-lsndir option is provided, an extra copy of this file is saved in that directory.


    uuid

    If a UUID was provided by the --incremental-history-uuid option, then it is saved here. Otherwise, this is the empty string.

    name

    If a name was provided by the --history or the --incremental-history-name option, then it is saved here. Otherwise, this is the empty string.

    tool_name

    The name of the mariadb-backup executable that performed the backup. This is generally mariadb-backup.

    tool_command

    The arguments that were provided to mariadb-backup when it performed the backup.

    tool_version

    The version of mariadb-backup that performed the backup.

    ibbackup_version

    The version of mariadb-backup that performed the backup.

    server_version

    The version of MariaDB Server that was backed up.

    start_time

    The time that the backup started.

    end_time

    The time that the backup ended.

    lock_time

    The amount of time that mariadb-backup held its locks.

    binlog_pos

    This field stores the binary log file name and position that corresponds to the backup.

    This field also stores the value of the gtid_current_pos system variable that corresponds to the backup.

    The values in this field are only guaranteed to be consistent with the backup if the --no-lock option was not provided when the backup was taken.

    innodb_from_lsn

    This is identical to from_lsn in xtrabackup_checkpoints.

    If the backup is a full backup, then innodb_from_lsn has the value of 0.

    If the backup is an incremental backup, then innodb_from_lsn has the value of the log sequence number (LSN) at which the backup started reading from the InnoDB redo log.

    innodb_to_lsn

    This is identical to to_lsn in xtrabackup_checkpoints.

    innodb_to_lsn has the value of the log sequence number (LSN) of the last checkpoint in the InnoDB redo log.

    partial

    If the backup is a partial backup, then this value is Y.

    Otherwise, this value is N.

    incremental

    If the backup is an incremental backup, then this value is Y.

    Otherwise, this value is N.

    format

    This field's value is the format of the backup.

    If the --stream option was set to xbstream, then this value is xbstream.

    If the --stream option was not provided, then this value is file.

    compressed

    If the --compress option was provided, then this value is compressed.

    Otherwise, this value is N.

    mariadb_backup_slave_info

    If the --slave-info option is provided, this file contains the CHANGE MASTER command that can be used to set up a new server as a slave of the original server's master after the backup has been restored.

    mariadb-backup does not check whether GTIDs are being used in replication. It takes a shortcut: if the gtid_slave_pos system variable is non-empty, then it writes the CHANGE MASTER command with the MASTER_USE_GTID option set to slave_pos. Otherwise, it writes the CHANGE MASTER command with the MASTER_LOG_FILE and MASTER_LOG_POS options, using the master's binary log file and position.

    xtrabackup_slave_info

    If the --slave-info option is provided, this file contains the CHANGE MASTER command that can be used to set up a new server as a slave of the original server's master after the backup has been restored.

    mariadb-backup does not check whether GTIDs are being used in replication. It takes a shortcut: if the gtid_slave_pos system variable is non-empty, then it writes the CHANGE MASTER command with the MASTER_USE_GTID option set to slave_pos. Otherwise, it writes the CHANGE MASTER command with the MASTER_LOG_FILE and MASTER_LOG_POS options, using the master's binary log file and position.

    mariadb_backup_galera_info

    If the --galera-info option is provided, this file contains information about a Galera Cluster node's state.

    The file contains the values of the wsrep_local_state_uuid and wsrep_last_committed status variables.

    The values are written in the following format:

    For example:
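The format and sample listings did not survive the page export. The values are written as wsrep_local_state_uuid, a colon, and wsrep_last_committed; the UUID and sequence number below are invented for illustration:

```
d38587ce-246c-11e5-bcb8-6c626d957cff:1352215
```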

    xtrabackup_galera_info

    If the --galera-info option is provided, this file contains information about a Galera Cluster node's state.

    The file contains the values of the wsrep_local_state_uuid and wsrep_last_committed status variables.

    The values are written in the following format:

    For example:

    <table>.delta

    If the backup is an incremental backup, this file contains changed pages for the table.

    <table>.delta.meta

    If the backup is an incremental backup, this file contains metadata about <table>.delta files. The fields in this file are listed below.

    page_size

    This field contains either the value of innodb_page_size or the value of the KEY_BLOCK_SIZE table option for the table if the ROW_FORMAT table option for the table is set to COMPRESSED.

    zip_size

    If the ROW_FORMAT table option for this table is set to COMPRESSED, this field contains the value of the compressed page size.

    space_id

    This field contains the value of the table's space_id.

    This page is licensed: CC BY-SA / Gnu FDL

    mariadb-backup was previously called mariabackup.


    Getting Data from MariaDB

    This guide explains how to retrieve data from MariaDB using the SELECT statement, progressing from basic syntax to more involved queries.

    The simplest way to retrieve data from MariaDB is to use the SELECT statement. Since the SELECT statement is an essential SQL statement, it has many options available. It's not necessary to know or use them all; you can execute very basic SELECT statements if that satisfies your needs. However, as you use MariaDB more, you may need more powerful SELECT statements. In this article we will go through the basics of SELECT and progress to more involved statements, moving from the beginner level to the more intermediate, and hopefully you will find some benefit regardless of your skill level. Absolute beginners who are just starting with MariaDB may want to read the MariaDB Basics article first.

    Creating and Populating the Tables

    In order to follow the examples that follow, you can create and pre-populate the tables, as follows:
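The original listing was lost in the export. A minimal schema consistent with the examples discussed in this article might look like the following; the table and column names (books, authors, name_last, and so on) are assumptions, not the article's original listing:

```sql
-- Hypothetical schema and sample data chosen to match the
-- examples discussed later in this article.
CREATE TABLE authors (
  author_id INT AUTO_INCREMENT PRIMARY KEY,
  name_last VARCHAR(50),
  name_first VARCHAR(50)
);

CREATE TABLE books (
  isbn CHAR(20) PRIMARY KEY,
  title VARCHAR(100),
  author_id INT,
  publisher VARCHAR(50)
);

INSERT INTO authors (name_last, name_first)
  VALUES ('Kafka', 'Franz'), ('Dostoevsky', 'Fyodor');

INSERT INTO books (isbn, title, author_id, publisher)
  VALUES ('0-553-21175-7', 'Crime & Punishment', 2, 'Bantam Classics'),
         ('0-679-73450-2', 'Crime & Punishment', 2, 'Vintage'),
         ('0-8052-1057-5', 'The Trial', 1, 'Schocken');
```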

    If you are unclear what these statements do, review the and tutorials.

    Basic Elements

    The basic, minimal elements of the statement call for the keyword SELECT, of course, the columns to select or to retrieve, and the table from which to retrieve rows of data. Actually, for the columns to select, we can use the asterisk as a wildcard to select all columns in a particular table. Using a database from a fictitious bookstore, we might enter the following SQL statement to get a list of all columns and rows in a table containing information on books:

    This will retrieve all of the data contained in the books table. As you can see, not all values have been populated. If we want to retrieve only certain columns, we would list them in place of the asterisk in a comma-separated list like so:
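The original example was lost in the export. Assuming a books table with isbn, title, and author_id columns (hypothetical names), a column list in place of the asterisk looks like:

```sql
SELECT isbn, title, author_id
FROM books;
```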

    This narrows the width of the results set by retrieving only three columns, but it still retrieves all of the rows in the table. If the table contains thousands of rows of data, this may be more data than we want. If we want to limit the results to just a few books, say five, we would include what is known as a LIMIT clause:
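The example listing is missing here; assuming the hypothetical books table used throughout, a LIMIT clause restricting the results to five rows would look like:

```sql
SELECT isbn, title, author_id
FROM books
LIMIT 5;
```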

    This will give us the first five rows found in the table. If we want to get the next ten found, we would add a starting point parameter just before the number of rows to display, separated by a comma:
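The listing is missing here. With a starting point of five (skipping the first five rows) followed by the number of rows to display, the statement would look like this (table and column names are assumptions):

```sql
SELECT isbn, title, author_id
FROM books
LIMIT 5, 10;
```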

    In this case, there are only three further records.

    Selectivity and Order

    The previous statements have narrowed the number of columns and rows retrieved, but they haven't been very selective. Suppose that we want only books written by a certain author, say Dostoevsky. Looking in the authors table we find that his author identification number is 2. Using a WHERE clause, we can retrieve a list of books from the database for this particular author like so:

    We removed author_id from the list of columns to select, but left the basic LIMIT clause in, because we want to point out that the syntax is fairly strict about the ordering of clauses and flags. You can't enter them in just any order; you'll get an error in return.

    The SQL statements we've looked at thus far will display the titles of books in the order in which they're found in the database. If we want to put the results in alphanumeric order based on the values of the title column, for instance, we would add an ORDER BY clause like this:
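The listing is missing here; a version of the ongoing example with an ORDER BY clause (hypothetical table and column names) would be:

```sql
SELECT isbn, title
FROM books
WHERE author_id = 2
ORDER BY title ASC
LIMIT 5;
```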

    Notice that the ORDER BY clause goes after the WHERE clause and before the LIMIT clause. Not only will this statement display the rows in order by book title, but it will retrieve only the first five based on that ordering. That is to say, MariaDB will first retrieve all of the rows based on the WHERE clause, order the data based on the ORDER BY clause, and then display a limited number of rows based on the LIMIT clause. Hence the ordering of the clauses. You may have noticed that we slipped in the ASC flag. It tells MariaDB to order the rows in ascending order for the column name it follows. It's not strictly necessary, since ascending order is the default. However, if we want to display data in descending order, we would replace ASC with DESC. To order by more than one column, additional columns may be given in the ORDER BY clause in a comma-separated list, each with the ASC or DESC flag if preferred.

    Friendlier and More Complicated

    So far we've been working with one table of data containing information on books for a fictitious bookstore. A database will usually have more than one table, of course. In this particular database, there's also one called authors in which the name and other information on authors is contained. To be able to select data from two tables in one statement, we will have to tell MariaDB that we want to join the tables and will need to provide a join point. This can be done with a JOIN clause, as shown in the following SQL statement:
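The original statement and its results were lost in the export. A sketch consistent with the surrounding text, joining on the shared author_id column and using CONCAT to paste the author's name together (all names are hypothetical), would be:

```sql
SELECT isbn, title,
       CONCAT(name_first, ' ', name_last) AS author
FROM books
JOIN authors USING (author_id)
WHERE name_last = 'Dostoevsky'
ORDER BY title ASC
LIMIT 5;
```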

    Our statement is getting hefty, but it's the same one to which we've been adding. Don't let the clutter fluster you. Looking for the new elements, let's focus on the JOIN clause first. There are a few possible ways to construct a join. This method works if both tables contain a column of the same name and value. Otherwise you'll have to redo the JOIN clause to look something like this:
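The alternative listing was lost in the export; the reworked JOIN clause with an explicit join condition, under the assumption described below that the authors key column is named row_id, might look like:

```sql
SELECT isbn, title,
       CONCAT(name_first, ' ', name_last) AS author
FROM books
JOIN authors ON books.author_id = authors.row_id
WHERE name_last = 'Dostoevsky';
```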

    This excerpt is based on the assumption that the key field in the authors table is not called author_id, but row_id instead. There's much more that can be said about joins, but that would make for a much longer article. If you want to learn more on joins, look at MariaDB's documentation page on JOIN syntax.

    Looking again at the last full SQL statement above, you must have spotted the CONCAT() function that we added to the on-going example statement. This string function takes the values of the columns and strings given and pastes them together, to give one neat field in the results. We also employed the AS keyword to change the heading of the results set for the field to author. This is much tidier. Since we joined the books and the authors tables together, we were able to search for books based on the author's last name rather than having to look up the author ID first. This is a much friendlier method, albeit more complicated. Incidentally, we can have MariaDB check columns from both tables to narrow our search. We would just add further column = value conditions, joined with AND, in the WHERE clause. Notice that the string containing the author's name is wrapped in quotes; otherwise, the string would be considered a column name and we'd get an error.

    The name Dostoevsky is sometimes spelled Dostoevskii, as well as a few other ways. If we're not sure how it's spelled in the authors table, we could use the LIKE operator instead of the equal sign, along with a wildcard. If we think the author's name is probably spelled either of the two ways mentioned, we could enter something like this:
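The listing is missing here; the pattern-matching version of the ongoing example (hypothetical names) would be:

```sql
SELECT isbn, title,
       CONCAT(name_first, ' ', name_last) AS author
FROM books
JOIN authors USING (author_id)
WHERE name_last LIKE 'Dostoevsk%';
```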

    This will match any author last name starting with Dostoevsk. Notice that the wildcard here is not an asterisk, but a percent-sign.

    Some Flags

    There are many flags or parameters that can be used in a statement. To list and explain all of them with examples would make this a very lengthy article. The reality is that most people never use some of them anyway. So, let's take a look at a few that you may find useful as you get more involved with MariaDB or if you work with large tables on very active servers.

    The first flag that may be given (it goes immediately after the SELECT keyword) is ALL. By default, all rows that meet the requirements of the various clauses given are selected, so this flag isn't necessary. If instead we want only the first occurrence of a particular criterion to be displayed, we can add the DISTINCT option. For instance, for authors like Dostoevsky there are several printings of a particular title. In the results shown earlier you may have noticed that there were two copies of Crime & Punishment listed; however, they have different ISBN numbers and different publishers. Suppose that for our search we only want one row displayed for each title. We could do that like so:
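The listing is missing here; a DISTINCT version of the example, thinned down to just the title column (hypothetical names), would be:

```sql
SELECT DISTINCT title
FROM books
JOIN authors USING (author_id)
WHERE name_last LIKE 'Dostoevsk%';
```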

    We've thinned out the ongoing SQL statement a bit for clarity. This statement will result in only one row displayed for Crime & Punishment, and it is the first one found.

    If we're retrieving data from an extremely busy database, by default any other SQL statements entered simultaneously which change or update data are executed before a SELECT statement; SELECT statements are considered to be of lower priority. However, if we would like a particular SELECT statement to be given a higher priority, we can add the keyword HIGH_PRIORITY. Modifying the previous SQL statement for this factor, we would enter it like this:
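The listing is missing here; adding the HIGH_PRIORITY keyword to the previous statement (hypothetical names) might look like:

```sql
SELECT DISTINCT HIGH_PRIORITY title
FROM books
JOIN authors USING (author_id)
WHERE name_last LIKE 'Dostoevsk%';
```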

    You may have noticed, in the one example earlier in which the results are shown, that there's a status line displayed that specifies the number of rows in the results set. This is less than the number of rows that were found in the database that met the statement's criteria. It's less because we used a LIMIT clause. If we add the SQL_CALC_FOUND_ROWS flag just before the column list, MariaDB will calculate the number of rows found even if there is a LIMIT clause.

    To retrieve this information, though, we will have to use the FOUND_ROWS() function like so:
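The listing is missing here; a sketch of the two statements involved (hypothetical table name) would be:

```sql
SELECT SQL_CALC_FOUND_ROWS isbn, title
FROM books
LIMIT 5;

SELECT FOUND_ROWS();
```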

    This value is temporary and is lost if the connection is terminated. It cannot be retrieved by any other client session. It relates only to the current session and the value for the variable when it was last calculated.

    Conclusion

    There are several more parameters and possibilities for the SELECT statement that we had to skip to keep this article a reasonable length. A popular one that we left out is the GROUP BY clause for calculating aggregate data for columns (e.g., an average). There are also several flags for caching results and a clause for exporting a results set to a text file. If you would like to learn more about SELECT and all of the options available, look at the online documentation for the SELECT statement.

    This page is licensed: CC BY-SA / Gnu FDL

    Partition Maintenance

    Discover administrative tasks for managing partitions, such as adding, dropping, reorganizing, and coalescing them to keep your data optimized.

    Overview

    This article covers:

    • Partitioning best practices.

    • How to maintain a time-series-partitioned table.

    • AUTO_INCREMENT secrets.

    General partitioning advice, taken from Rick James's notes:

    1. Don't use PARTITIONing until you know how and why it will help.

    2. Don't use PARTITION unless you will have more than a million rows to handle.

    3. No more than 50 PARTITIONs on a table (open, show table status, etc, are impacted).

    4. PARTITION BY RANGE is the only useful method.

    • Subpartitions are not useful.

    • The partition field should not be the first field in any key.

    • It is okay to have AUTO_INCREMENT as the first part of a compound key, or in a non-unique index.

    It is so tempting to believe that PARTITIONing will solve performance problems. But it is so often wrong.

    PARTITIONing splits up one table into several smaller tables. But table size is rarely a performance issue. Instead, I/O time and indexes are the issues.

    A common fallacy: "Partitioning will make my queries run faster". It won't. Ponder what it takes for a 'point query'. Without partitioning, but with an appropriate index, there is a BTree (the index) to drill down to find the desired row. For a billion rows, this might be 5 levels deep. With partitioning, first the partition is chosen and "opened", then a smaller BTree (of say 4 levels) is drilled down. Well, the savings of the shallower BTree is consumed by having to open the partition. Similarly, if you look at the disk blocks that need to be touched, and which of those are likely to be cached, you come to the conclusion that about the same number of disk hits is likely. Since disk hits are the main cost in a query, Partitioning does not gain any performance (at least for this typical case). The 2D case (below) gives the main contradiction to this discussion.

    Use Cases for PARTITIONing

    Use case #1 -- time series. Perhaps the most common use case where PARTITIONing shines is in a dataset where "old" data is periodically deleted from the table. RANGE PARTITIONing by day (or other unit of time) lets you do a nearly instantaneous DROP PARTITION plus REORGANIZE PARTITION instead of a much slower DELETE. Much of this blog is focused on this use case.

    The big win for Case #1: DROP PARTITION is a lot faster than DELETEing a lot of rows.

    Use case #2 -- 2-D index. INDEXes are inherently one-dimensional. If you need two "ranges" in the WHERE clause, try to migrate one of them to PARTITIONing.

    Finding the nearest 10 pizza parlors on a map needs a 2D index. Partition pruning sort of gives a second dimension. See Latitude/Longitude Indexing That uses PARTITION BY RANGE(latitude) together with PRIMARY KEY(longitude, ...)

    The big win for Case #2: Scanning fewer rows.

    Use case #3 -- hot spot. This is a bit complicated to explain. Given this combination:

    • A table's index is too big to be cached, but the index for one partition is cacheable, and

    • The index is randomly accessed, and

    • Data ingestion would normally be I/O bound due to updating the index Partitioning can keep all the index "hot" in RAM, thereby avoiding a lot of I/O.

    The big win for Case #3: Improving caching to decrease I/O to speed up operations.

    AUTO_INCREMENT in PARTITION

    • For AUTO_INCREMENT to work (in any table), it must be the first field in some index. Period. There are no other requirements on indexing it.

    • Being the first field in some index lets the engine find the 'next' value when opening the table.

    • AUTO_INCREMENT need not be UNIQUE. What you lose: prevention of explicitly inserting a duplicate id. (This is rarely needed, anyway.)

    Examples (where id is AUTO_INCREMENT):

    • PRIMARY KEY (...), INDEX(id)

    • PRIMARY KEY (...), UNIQUE(id, partition_key) -- not useful

    • INDEX(id), INDEX(...) (but no UNIQUE keys)

    • PRIMARY KEY(id), ... -- works only if id is the partition key (not very useful)

    PARTITION Maintenance for the Time-Series Case

    Let's focus on the maintenance task involved in Case #1, as described above.

    You have a large table that is growing on one end and being pruned on the other. Examples include news, logs, and other transient information. PARTITION BY RANGE is an excellent vehicle for such a table.

    • DROP PARTITION is much faster than DELETE. (This is the big reason for doing this flavor of partitioning.)

    • Queries often limit themselves to 'recent' data, thereby taking advantage of "partition pruning".

    Depending on the type of data, and how long before it expires, you might have daily or weekly or hourly (etc) partitions.

    There is no simple SQL statement to "drop partitions older than 30 days" or "add a new partition for tomorrow". It would be tedious to do this by hand every day.

    High-Level View of the Code
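The original listing was lost in the export. A sketch of the pattern described in this section, with a bogus "start" partition, daily partitions whose LESS THAN bound is the next day's date, and an empty "future" catch-all (table, column, and date values are illustrative, not the article's original code):

```sql
-- Illustrative time-series table partitioned by day.
CREATE TABLE tbl (
  dt  DATETIME NOT NULL,
  msg VARCHAR(100),
  KEY (dt)
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(dt)) (
  PARTITION start        VALUES LESS THAN (0),
  PARTITION from20120414 VALUES LESS THAN (TO_DAYS('2012-04-15')),
  PARTITION from20120415 VALUES LESS THAN (TO_DAYS('2012-04-16')),
  PARTITION from20120416 VALUES LESS THAN (TO_DAYS('2012-04-17')),
  PARTITION future       VALUES LESS THAN MAXVALUE
);
```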

    After which you have...

    Why?

    Perhaps you noticed some odd things in the example. Let me explain them.

    • Partition naming: Make them useful.

    • from20120415 ... 04-16: Note that the LESS THAN is the next day's date

    • The "start" partition: See paragraph below.

    • The "future" partition: This is normally empty, but it can catch overflows; more later.

    Why the bogus "start" partition? If an invalid datetime (such as Feb 31) were used, the datetime would turn into NULL, and NULLs are put into the first partition. Since any SELECT could have an invalid date (yes, this is stretching things), the partition pruner always includes the first partition in the resulting set of partitions to search. So, if the SELECT must scan the first partition, it is slightly more efficient for that partition to be empty. Hence the bogus "start" partition. (There is a longer discussion of this by The Data Charmer.) MySQL 5.5 eliminates the bogus check, but only if you switch to the newer PARTITION BY RANGE COLUMNS syntax:

    More on the "future" partition. Sooner or later the cron/EVENT to add tomorrow's partition will fail to run. The worst that could happen is for tomorrow's data to be lost. The easiest way to prevent that is to have a partition ready to catch it, even if this partition is normally always empty.

    Having the "future" partition makes the ADD PARTITION script a little more complex. Instead, it needs to take tomorrow's data from "future" and put it into a new partition. This is done with the REORGANIZE command shown. Normally nothing need be moved, and the ALTER takes virtually zero time.

    When to do the ALTERs?

    • DROP if the oldest partition is "too old".

    • Add 'tomorrow' near the end of today, but don't try to add it twice.

    • Do not count partitions -- there are two extra ones. Use the partition names or information_schema.PARTITIONS.PARTITION_DESCRIPTION.

    • DROP/Add only once in the script. Rerun the script if you need more.

    Variants

    As I have said many times, in many places, BY RANGE is perhaps the only useful variant. And a time series is the most common use for PARTITIONing.

    • (as discussed here) DATETIME/DATE with TO_DAYS()

    • DATETIME/DATE with TO_DAYS(), but with 7-day intervals

    • TIMESTAMP with TO_DAYS(). (version 5.1.43 or later)

    • PARTITION BY RANGE COLUMNS(DATETIME) (5.5.0)

    How many partitions?

    • Under, say, 5 partitions -- you get very little of the benefits.

    • Over, say, 50 partitions, and you hit inefficiencies elsewhere.

    • Certain operations (SHOW TABLE STATUS, opening the table, etc) open every partition.

    • DML statements, before version 5.6.6, would lock all partitions before pruning!

    Detailed Code

    The complexity of the code is in the discovery of the PARTITION names, especially of the oldest and the 'next'.

    To run the demo,

    • Install Perl and DBIx::DWIW (from CPAN).

    • copy the txt file (link above) to demo_part_maint.pl

    • execute perl demo_part_maint.pl to get the rest of the instructions

    The program will generate and execute (when needed) either of these:
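The generated statements were lost in the export; under the illustrative table and partition names used in this article's sketches (assumptions, not the original output), they would be of these two forms:

```sql
ALTER TABLE tbl DROP PARTITION from20120414;

ALTER TABLE tbl REORGANIZE PARTITION future INTO (
  PARTITION from20120417 VALUES LESS THAN (TO_DAYS('2012-04-18')),
  PARTITION future       VALUES LESS THAN MAXVALUE
);
```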

    Postlog

    The tips in this document apply to MySQL, MariaDB, and Percona.

    Future (as envisioned in 2016):

    • MySQL 5.7.6 has "native partitioning for InnoDB".

    • FOREIGN KEY support, perhaps in a later 8.0.xx.

    • "GLOBAL INDEX" -- this would avoid the need for putting the partition key in every unique index, but make DROP PARTITION costly. This is farther into the future.

    MySQL 8.0 (released Sep 2016, not yet GA at the time of writing):

    • Only InnoDB tables can be partitioned -- MariaDB is likely to continue maintaining Partitioning on non-InnoDB tables, but Oracle is clearly not.

    • Some of the problems having lots of partitions are lessened by the Data-Dictionary-in-a-table.

    Native partitioning will give:

    • Slightly improved performance, by combining two "handlers" into one.

    • Decreased memory usage, especially when using a large number of partitions.

    See Also

    Rick James graciously allowed us to use this article in the documentation. His website has other useful tips, how-tos, optimizations, and debugging tips. Original source:

    This page is licensed: CC BY-SA / Gnu FDL

    Views

    Discover how to create and manage views in MariaDB to simplify complex queries, restrict data access, and provide an abstraction layer over tables.

    A Tutorial Introduction

    Up-front warning: This is the beginning of a very basic tutorial on views, based on my experimentation with them. This tutorial assumes that you've read the appropriate tutorials up to and including More Advanced Joins (or that you understand the concepts behind them). This page is intended to give you a general idea of how views work and what they do, as well as some examples of when you could use them.

    Requirements for This Tutorial

    In order to perform the SQL statements in this tutorial, you will need access to a MariaDB database and you will need the CREATE TABLE and CREATE VIEW privileges on this table.

    The Employee Database

    First, we need some data we can perform our optimizations on, so we'll recreate the tables from the tutorial, to provide us with a starting point. If you have already completed that tutorial and have this database already, you can skip ahead.

    First, we create the table that will hold all of the employees and their contact information:
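The original listing was lost in the export. A layout consistent with the fields mentioned later in this article (Position, Home Address, Home Phone); the table and column names are assumptions:

```sql
-- Hypothetical employee contact table.
CREATE TABLE Employees (
  ID           INT AUTO_INCREMENT PRIMARY KEY,
  First_Name   VARCHAR(50),
  Last_Name    VARCHAR(50),
  Position     VARCHAR(50),
  Home_Address VARCHAR(100),
  Home_Phone   VARCHAR(25)
);
```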

    Next, we add a few employees to the table:

    Now, we create a second table, containing the hours which each employee clocked in and out during the week:
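The original listing was lost in the export; a sketch of the clock-in/clock-out table (names are assumptions) might be:

```sql
-- Hypothetical table of clock-in and clock-out times.
CREATE TABLE Hours (
  ID          INT AUTO_INCREMENT PRIMARY KEY,
  Employee_ID INT,
  Clock_In    DATETIME,
  Clock_Out   DATETIME
);
```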

    Finally, although it is a lot of information, we add a full week of hours for each of the employees into the second table that we created:

    Working with the Employee Database

    In this example, we are going to assist Human Resources by simplifying the queries that their applications need to perform. At the same time, it's going to enable us to abstract their queries from the database, which allows us more flexibility in maintaining it.

    Filtering by Name, Date and Time

    In the previous tutorial, we looked at a JOIN query that displayed all of the lateness instances for a particular employee. In this tutorial, we are going to abstract that query somewhat to provide us with all lateness occurrences for all employees, and then standardize that query by making it into a view.

    Our previous query looked like this:
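The query itself was lost in the export. A sketch consistent with the description that follows (Heimholtz's punch-in times after seven AM), using hypothetical table and column names:

```sql
SELECT First_Name, Last_Name, Clock_In, Clock_Out
FROM Employees
JOIN Hours ON Employees.ID = Hours.Employee_ID
WHERE Last_Name = 'Heimholtz'
  AND TIME(Clock_In) > '07:00:00';
```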

    The result:

    Refining Our Query

    The previous example displays all of Heimholtz's punch-in times that were after seven AM. We can see here that Heimholtz has been late twice within this reporting period, and we can also see that in both instances he either left exactly on time or he left early. Our company policy, however, dictates that late instances must be made up at the end of one's shift, so we want to exclude from our report anyone whose clock-out time was greater than 10 hours and one minute after their clock-in time.
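The refined query was lost in the export; a sketch matching the description, reporting everyone who punched in late and did not stay long enough to make up the time (hypothetical names), would be:

```sql
SELECT First_Name, Last_Name, Clock_In, Clock_Out
FROM Employees
JOIN Hours ON Employees.ID = Hours.Employee_ID
WHERE TIME(Clock_In) > '07:00:00'
  AND TIMEDIFF(Clock_Out, Clock_In) < '10:01:00';
```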

    This gives us the following list of people who have violated our attendance policy:

    The Utility of Views

    We can see in the previous example that there have been several instances of employees coming in late and leaving early. Unfortunately, we can also see that this query is getting needlessly complex. Having all of this SQL in our application not only creates more complex application code, but also means that if we ever change the structure of this table we're going to have to change what is becoming a somewhat messy query. This is where views begin to show their usefulness.

    Creating the Employee Tardiness View

    Creating a view is almost exactly the same as creating a SELECT statement, so we can use our previous SELECT statement in the creation of our new view:
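The CREATE VIEW listing was lost in the export. A sketch consistent with the discussion that follows, with SQL SECURITY INVOKER on the first line; the view name and the table and column names are assumptions:

```sql
-- Hypothetical view wrapping the tardiness query.
CREATE SQL SECURITY INVOKER VIEW Employee_Tardiness AS
SELECT e.First_Name, e.Last_Name, h.Clock_In, h.Clock_Out,
       TIMEDIFF(h.Clock_Out, h.Clock_In) AS Difference
FROM Employees e
JOIN Hours h ON e.ID = h.Employee_ID
WHERE TIME(h.Clock_In) > '07:00:00'
  AND TIMEDIFF(h.Clock_Out, h.Clock_In) < '10:01:00';
```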

    Note that the first line of our query contains the statement 'SQL SECURITY INVOKER' - this means that when the view is accessed, it runs with the same privileges that the person accessing the view has. Thus, if someone without access to our Employees table tries to access this view, they will get an error.

    Other than the security parameter, the rest of the query is fairly self explanatory. We simply run 'CREATE VIEW AS' and then append any valid SELECT statement, and our view is created. Now if we do a SELECT from the view, we can see we get the same results as before, with much less SQL:

    Now we can even perform operations on the table, such as limiting our results to just those with a Difference of at least five minutes:

    Other Uses of Views

    Aside from just simplifying our application's SQL queries, there are also other benefits that views can provide, some of which are only possible by using views.

    Restricting Data Access

    For example, even though our Employees database contains fields for Position, Home Address, and Home Phone, our query does not allow for these fields to be shown. This means that in the case of a security issue in the application (for example, an SQL injection attack, or even a malicious programmer), there is no risk of disclosing an employee's personal information.

    Row-level Security

    We can also define separate views to include a specific WHERE clause for security; for example, if we wanted to restrict a department head's access to only the staff that report to him, we could specify his identity in the view's CREATE statement, and he would then be unable to see any other department's employees, despite them all being in the same table. If this view is writeable and it is defined with the CASCADE clause, this restriction will also apply to writes. This is actually the only way to implement row-level security in MariaDB, so views play an important part in that area as well.

    Pre-emptive Optimization

    We can also define our views in such a way as to force the use of indexes, so that other, less-experienced developers don't run the risk of running un-optimized queries or JOINs that result in full-table scans and extended locks. Expensive queries, queries that SELECT *, and poorly thought-out JOINs can not only slow down the database entirely, but can cause inserts to fail, clients to time out, and reports to error out. By creating a view that is already optimized and letting users perform their queries on that, you can ensure that they won't cause a significant performance hit unnecessarily.

    Abstracting Tables

    When we re-engineer our application, we sometimes need to change the database to optimize or accommodate new or removed features. We may, for example, want to partition our tables when they start getting too large and queries start taking too long. Alternately, we may be installing a new application with different requirements alongside a legacy application. Unfortunately, database redesign will tend to break backwards-compatibility with previous applications, which can cause obvious problems.

    Using views, we can change the format of the underlying tables while still presenting the same table format to the legacy application. Thus, an application which demands username, hostname, and access time in string format can access the same data as an application which requires firstname, lastname, user@host, and access time in Unix timestamp format.

    Summary

    Views are an SQL feature that can provide a lot of versatility in larger applications, and can even simplify smaller applications further. Just as stored procedures can help us abstract out our database logic, views can simplify the way we access data in the database, and can help un-complicate our queries to make application debugging easier and more efficient.

    The initial version of this article was copied, with permission, from on 2012-10-05.

    This page is licensed: CC BY-SA / Gnu FDL

    CREATE PROCEDURE

    The CREATE PROCEDURE statement defines a new stored procedure, specifying its name, parameters (IN, OUT, INOUT), and the SQL statements it executes.

    Syntax

    CREATE
        [OR REPLACE]
        [DEFINER = { user | CURRENT_USER | role | CURRENT_ROLE }]
        PROCEDURE [IF NOT EXISTS] sp_name ([proc_parameter[,...]])
        [characteristic ...] routine_body
    
    proc_parameter:
        [ IN | OUT | INOUT ] param_name type [DEFAULT value or expression]
    
    type:
        Any valid MariaDB data type
    
    characteristic:
        LANGUAGE SQL
      | [NOT] DETERMINISTIC
      | { CONTAINS SQL | NO SQL | READS SQL DATA | MODIFIES SQL DATA }
      | SQL SECURITY { DEFINER | INVOKER }
      | COMMENT 'string'
    
    routine_body:
        Valid SQL procedure statement
    

    Description

    Creates a stored procedure. By default, a routine is associated with the default database. To associate the routine explicitly with a given database, specify the name as db_name.sp_name when you create it.

    When the routine is invoked, an implicit USE db_name is performed (and undone when the routine terminates). This causes the routine to have the given default database while it executes. USE statements within stored routines are disallowed.

    When a stored procedure has been created, you invoke it by using the CALL statement.

    To execute the CREATE PROCEDURE statement, it is necessary to have the CREATE ROUTINE privilege. By default, MariaDB automatically grants the ALTER ROUTINE and EXECUTE privileges to the routine creator.
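    A minimal sketch of the privileges involved (the account, database, and procedure names are examples):

```sql
-- Let a development account create routines in the app database.
GRANT CREATE ROUTINE ON app.* TO 'dev'@'localhost';

-- After 'dev' creates a procedure, MariaDB grants that account
-- ALTER ROUTINE and EXECUTE on it automatically; other accounts
-- need an explicit grant:
GRANT EXECUTE ON PROCEDURE app.report TO 'analyst'@'localhost';
```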

    The DEFINER and SQL SECURITY clauses specify the security context to be used when checking access privileges at routine execution time. Specifying a DEFINER other than your own account requires the SET USER privilege.

    If the routine name is the same as the name of a built-in SQL function, you must use a space between the name and the following parenthesis when defining the routine, or a syntax error occurs. This is also true when you invoke the routine later. For this reason, we suggest that it is better to avoid re-using the names of existing SQL functions for your own stored routines.
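    To illustrate the rule just stated (a sketch only; reusing a built-in name like this is exactly what the text advises against):

```sql
DELIMITER //

-- 'count' collides with the built-in COUNT() function, so a space
-- is required between the name and the parenthesis.
CREATE PROCEDURE count (OUT n INT)
BEGIN
  SELECT 1 INTO n;
END;
//

DELIMITER ;

-- The space is also required at invocation time:
CALL count (@n);
```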

    The IGNORE_SPACE SQL mode applies to built-in functions, not to stored routines. It is always allowable to have spaces after a routine name, regardless of whether IGNORE_SPACE is enabled.

    The parameter list enclosed within parentheses must always be present. If there are no parameters, an empty parameter list of () should be used. Parameter names are not case sensitive.

    Each parameter can be declared to use any valid data type, except that the COLLATE attribute cannot be used.

    For valid identifiers to use as procedure names, see Identifier Names.

    Things to be Aware of With CREATE OR REPLACE

    • One can't use OR REPLACE together with IF NOT EXISTS.

    CREATE PROCEDURE IF NOT EXISTS

    If the IF NOT EXISTS clause is used, the procedure is only created if a procedure with the same name does not already exist. If the procedure already exists, a warning is triggered by default.

    IN/OUT/INOUT/IN OUT

    Each parameter is an IN parameter by default. To specify otherwise for a parameter, use the keyword OUT or INOUT before the parameter name.

    An IN parameter passes a value into a procedure. The procedure might modify the value, but the modification is not visible to the caller when the procedure returns. An OUT parameter passes a value from the procedure back to the caller. Its initial value is NULL within the procedure, and its value is visible to the caller when the procedure returns. An INOUT parameter is initialized by the caller, can be modified by the procedure, and any change made by the procedure is visible to the caller when the procedure returns.

    For each OUT or INOUT parameter, pass a user-defined variable in the CALL statement that invokes the procedure so that you can obtain its value when the procedure returns. If you are calling the procedure from within another stored procedure or function, you can also pass a routine parameter or local routine variable as an IN or INOUT parameter.
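    The three parameter modes can be seen side by side in a small sketch (procedure and variable names are examples):

```sql
DELIMITER //

-- p_demo reads a, returns a result through b, and updates c in place.
CREATE PROCEDURE p_demo (IN a INT, OUT b INT, INOUT c INT)
BEGIN
  SET b = a * 10;   -- visible to the caller via its OUT variable
  SET c = c * 2;    -- caller's variable is both read and updated
END;
//

DELIMITER ;

SET @out = NULL, @both = 5;
CALL p_demo(3, @out, @both);
SELECT @out, @both;   -- expect 30 and 10
```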

    DEFAULT value or expression

    In recent MariaDB versions, each parameter can be defined as having a default value or expression. This can be useful when extra parameters need to be added to a procedure that is already in use.
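    A sketch of the syntax (the scores table and procedure name are hypothetical, and the DEFAULT clause requires a MariaDB version that supports it):

```sql
DELIMITER //

CREATE PROCEDURE add_points (IN user_id INT, IN points INT DEFAULT 1)
  UPDATE scores SET total = total + points WHERE id = user_id;
//

DELIMITER ;

CALL add_points(42);      -- points falls back to its default of 1
CALL add_points(42, 10);  -- explicit value overrides the default
```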

    DETERMINISTIC/NOT DETERMINISTIC

    DETERMINISTIC and NOT DETERMINISTIC apply only to functions. Specifying DETERMINISTIC or NOT DETERMINISTIC in procedures has no effect. The default value is NOT DETERMINISTIC. Functions are DETERMINISTIC when they always return the same value for the same input, for example a truncate or substring function. Any function involving data stored in tables is therefore always NOT DETERMINISTIC.

    CONTAINS SQL/NO SQL/READS SQL DATA/MODIFIES SQL DATA

    CONTAINS SQL, NO SQL, READS SQL DATA, and MODIFIES SQL DATA are informative clauses that tell the server what the function does. MariaDB does not check in any way whether the specified clause is correct. If none of these clauses are specified, CONTAINS SQL is used by default.

    MODIFIES SQL DATA means that the function contains statements that may modify data stored in databases. This happens if the function contains statements like DELETE, UPDATE, INSERT, REPLACE, or DDL.

    READS SQL DATA means that the function reads data stored in databases but does not modify any data. This happens if SELECT statements are used, but no write operations are executed.

    CONTAINS SQL means that the function contains at least one SQL statement, but it does not read or write any data stored in a database. Examples include SET or DO.

    NO SQL means nothing, because MariaDB does not currently support any language other than SQL.
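    Putting the characteristics together in a sketch (the table and procedure names are examples; remember that MariaDB does not verify these clauses, so they should honestly describe the body):

```sql
DELIMITER //

CREATE PROCEDURE count_rows (OUT n INT)
  READS SQL DATA                       -- body only SELECTs
  COMMENT 'Row count for reporting'
BEGIN
  SELECT COUNT(*) INTO n FROM t;
END;
//

DELIMITER ;
```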

    The routine_body consists of a valid SQL procedure statement. This can be a simple statement such as SELECT or INSERT, or it can be a compound statement written using BEGIN and END. Compound statements can contain declarations, loops, and other control structure statements. See Programmatic and Compound Statements for syntax details.

    MariaDB allows routines to contain DDL statements, such as CREATE and DROP. MariaDB also allows stored procedures (but not stored functions) to contain SQL transaction statements such as COMMIT.

    For additional information about statements that are not allowed in stored routines, see Stored Routine Limitations.

    Invoking Stored Procedures from Within Programs

    For information about invoking stored procedures from within programs written in a language that has a MariaDB/MySQL interface, see CALL.

    OR REPLACE

    If the optional OR REPLACE clause is used, it acts as a shortcut for dropping the procedure (if it exists) and re-creating it, with the exception that any existing privileges for the procedure are not dropped.
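    That is, for a procedure sp_name, CREATE OR REPLACE behaves roughly like running:

```sql
DROP PROCEDURE IF EXISTS sp_name;
CREATE PROCEDURE sp_name ...;
```

    except that privileges already granted on sp_name survive the replacement.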

    sql_mode

    MariaDB stores the sql_mode system variable setting that is in effect at the time a routine is created, and always executes the routine with this setting in force, regardless of the server SQL mode in effect when the routine is invoked.
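    This can be seen in a short sketch (procedure name is an example):

```sql
SET sql_mode = 'STRICT_TRANS_TABLES';
CREATE PROCEDURE p_strict () SELECT 1;  -- captures STRICT_TRANS_TABLES

SET sql_mode = '';
CALL p_strict();   -- still runs under STRICT_TRANS_TABLES

-- The captured setting is visible in the routine metadata:
SELECT SQL_MODE FROM information_schema.ROUTINES
WHERE ROUTINE_NAME = 'p_strict';
```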

    Character Sets and Collations

    Procedure parameters can be declared with any character set/collation. If the character set and collation are not specifically set, the database defaults at the time of creation are used. If the database defaults change at a later stage, the stored procedure character set/collation will not be changed at the same time; the stored procedure needs to be dropped and recreated to ensure the same character set/collation as the database is used.
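    One way to check for such drift (a sketch; simpleproc2 is the example procedure defined later on this page):

```sql
-- Inspect the character set/collation stored with the parameters:
SHOW CREATE PROCEDURE simpleproc2\G

-- If it no longer matches the database defaults, drop and re-create:
DROP PROCEDURE simpleproc2;
-- ...followed by re-issuing the original CREATE PROCEDURE statement.
```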

    Oracle Mode

    A subset of Oracle's PL/SQL language is supported in addition to the traditional SQL/PSM-based MariaDB syntax. See Oracle Mode for details on what changes when running in Oracle mode.

    Examples

    The following example shows a simple stored procedure that uses an OUT parameter. It uses the DELIMITER command to set a new delimiter for the duration of the process; see Delimiters in the mariadb client.

    Character set and collation:

    CREATE OR REPLACE:

    See Also

    This page is licensed: CC BY-SA / Gnu FDL

    Aria file version: 1
    Block size: 8192
    maria_uuid: ee948482-6cb7-11ed-accb-3c7c3ff16468
    last_checkpoint_lsn: (1,0x235a)
    last_log_number: 1
    trid: 28
    recovery_failures: 0
    CREATE TABLE `Employees` (
      `ID` TINYINT(3) UNSIGNED NOT NULL AUTO_INCREMENT,
      `First_Name` VARCHAR(25) NOT NULL,
      `Last_Name` VARCHAR(25) NOT NULL,
      `Position` VARCHAR(25) NOT NULL,
      `Home_Address` VARCHAR(50) NOT NULL,
      `Home_Phone` VARCHAR(12) NOT NULL,
      PRIMARY KEY (`ID`)
    );
    ALTER TABLE Employees ADD PRIMARY KEY(ID);
    SELECT t.TABLE_SCHEMA, t.TABLE_NAME
    FROM information_schema.TABLES AS t
    LEFT JOIN information_schema.KEY_COLUMN_USAGE AS c 
    ON t.TABLE_SCHEMA = c.CONSTRAINT_SCHEMA
       AND t.TABLE_NAME = c.TABLE_NAME
       AND c.CONSTRAINT_NAME = 'PRIMARY'
    WHERE t.TABLE_SCHEMA != 'information_schema'
       AND t.TABLE_SCHEMA != 'performance_schema'
       AND t.TABLE_SCHEMA != 'mysql'
       AND c.CONSTRAINT_NAME IS NULL;
    CREATE TABLE `Employees` (
      `ID` TINYINT(3) UNSIGNED NOT NULL,
      `First_Name` VARCHAR(25) NOT NULL,
      `Last_Name` VARCHAR(25) NOT NULL,
      `Position` VARCHAR(25) NOT NULL,
      `Home_Address` VARCHAR(50) NOT NULL,
      `Home_Phone` VARCHAR(12) NOT NULL,
      `Employee_Code` VARCHAR(25) NOT NULL,
      PRIMARY KEY (`ID`),
      UNIQUE KEY (`Employee_Code`)
    );
    ALTER TABLE Employees ADD UNIQUE `EmpCode`(`Employee_Code`);
    CREATE UNIQUE INDEX HomePhone ON Employees(Home_Phone);
    CREATE TABLE t1 (a INT NOT NULL, b INT, UNIQUE (a,b));
    
    INSERT INTO t1 VALUES (1,1), (2,2);
    
    SELECT * FROM t1;
    +---+------+
    | a | b    |
    +---+------+
    | 1 |    1 |
    | 2 |    2 |
    +---+------+
    INSERT INTO t1 VALUES (2,1);
    
    SELECT * FROM t1;
    +---+------+
    | a | b    |
    +---+------+
    | 1 |    1 |
    | 2 |    1 |
    | 2 |    2 |
    +---+------+
    INSERT INTO t1 VALUES (3,NULL), (3, NULL);
    
    SELECT * FROM t1;
    +---+------+
    | a | b    |
    +---+------+
    | 1 |    1 |
    | 2 |    1 |
    | 2 |    2 |
    | 3 | NULL |
    | 3 | NULL |
    +---+------+
    SELECT (3, NULL) = (3, NULL);
    
    +-----------------------+
    | (3, NULL) = (3, NULL) |
    +-----------------------+
    | 0                     |
    +-----------------------+
    CREATE TABLE Table_1 (
      user_name VARCHAR(10),
      status ENUM('Active', 'ON-Hold', 'Deleted'),
      del CHAR(0) AS (IF(status IN ('Active', 'ON-Hold'),'', NULL)) persistent,
      UNIQUE(user_name,del)
    )
    CREATE TABLE t1 (a INT PRIMARY KEY,
    b BLOB,
    c1 VARCHAR(1000),
    c2 VARCHAR(1000),
    c3 VARCHAR(1000),
    c4 VARCHAR(1000),
    c5 VARCHAR(1000),
    c6 VARCHAR(1000),
    c7 VARCHAR(1000),
    c8 VARCHAR(1000),
    c9 VARCHAR(1000),
    UNIQUE KEY `b` (b),
    UNIQUE KEY `all_c` (c1,c2,c3,c4,c6,c7,c8,c9)) ENGINE=InnoDB;
    SHOW CREATE TABLE t1\G
    *************************** 1. row ***************************
           TABLE: t1
    CREATE TABLE: CREATE TABLE `t1` (
      `a` INT(11) NOT NULL,
      `b` BLOB DEFAULT NULL,
      `c1` VARCHAR(1000) DEFAULT NULL,
      `c2` VARCHAR(1000) DEFAULT NULL,
      `c3` VARCHAR(1000) DEFAULT NULL,
      `c4` VARCHAR(1000) DEFAULT NULL,
      `c5` VARCHAR(1000) DEFAULT NULL,
      `c6` VARCHAR(1000) DEFAULT NULL,
      `c7` VARCHAR(1000) DEFAULT NULL,
      `c8` VARCHAR(1000) DEFAULT NULL,
      `c9` VARCHAR(1000) DEFAULT NULL,
      PRIMARY KEY (`a`),
      UNIQUE KEY `b` (`b`) USING HASH,
      UNIQUE KEY `all_c` (`c1`,`c2`,`c3`,`c4`,`c6`,`c7`,`c8`,`c9`) USING HASH
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_swedish_ci
    CREATE TABLE t2 (a INT NOT NULL, b INT, INDEX (a,b));
    
    INSERT INTO t2 VALUES (1,1), (2,2), (2,2);
    
    SELECT * FROM t2;
    +---+------+
    | a | b    |
    +---+------+
    | 1 |    1 |
    | 2 |    2 |
    | 2 |    2 |
    +---+------+
    backup_type = full-backuped
    from_lsn = 0
    to_lsn = 1635102
    last_lsn = 1635102
    recover_binlog_info = 0
    wsrep_local_state_uuid:wsrep_last_committed
    d38587ce-246c-11e5-bcce-6bbd0831cc0f:1352215
    CREATE OR REPLACE TABLE books (
    isbn CHAR(20) PRIMARY KEY, 
    title VARCHAR(50),
    author_id INT,
    publisher_id INT,
    year_pub CHAR(4),
    description TEXT );
    
    CREATE OR REPLACE TABLE authors
    (author_id INT AUTO_INCREMENT PRIMARY KEY,
    name_last VARCHAR(50),
    name_first VARCHAR(50),
    country VARCHAR(50) );
    
    INSERT INTO authors (name_last, name_first, country) VALUES
      ('Kafka', 'Franz', 'Czech Republic'),
      ('Dostoevsky', 'Fyodor', 'Russia');
      
    INSERT INTO books (title, author_id, isbn, year_pub) VALUES
     ('The Trial', 1, '0805210407', '1995'),
     ('The Metamorphosis', 1, '0553213695', '1995'),
     ('America', 2, '0805210644', '1995'),
     ('Brothers Karamozov', 2, '0553212168', ''),
     ('Crime & Punishment', 2, '0679420290', ''),
     ('Crime & Punishment', 2, '0553211757', ''),
     ('Idiot', 2, '0192834118', ''),
     ('Notes from Underground', 2, '067973452X', '');
    SELECT * FROM books;
    +------------+------------------------+-----------+--------------+----------+-------------+
    | isbn       | title                  | author_id | publisher_id | year_pub | description |
    +------------+------------------------+-----------+--------------+----------+-------------+
    | 0192834118 | Idiot                  |         2 |         NULL |          | NULL        |
    | 0553211757 | Crime & Punishment     |         2 |         NULL |          | NULL        |
    | 0553212168 | Brothers Karamozov     |         2 |         NULL |          | NULL        |
    | 0553213695 | The Metamorphosis      |         1 |         NULL | 1995     | NULL        |
    | 0679420290 | Crime & Punishment     |         2 |         NULL |          | NULL        |
    | 067973452X | Notes from Underground |         2 |         NULL |          | NULL        |
    | 0805210407 | The Trial              |         1 |         NULL | 1995     | NULL        |
    | 0805210644 | America                |         2 |         NULL | 1995     | NULL        |
    +------------+------------------------+-----------+--------------+----------+-------------+
    8 rows in set (0.001 sec)
    SELECT isbn, title, author_id
    FROM books;
    +------------+------------------------+-----------+
    | isbn       | title                  | author_id |
    +------------+------------------------+-----------+
    | 0192834118 | Idiot                  |         2 |
    | 0553211757 | Crime & Punishment     |         2 |
    | 0553212168 | Brothers Karamozov     |         2 |
    | 0553213695 | The Metamorphosis      |         1 |
    | 0679420290 | Crime & Punishment     |         2 |
    | 067973452X | Notes from Underground |         2 |
    | 0805210407 | The Trial              |         1 |
    | 0805210644 | America                |         2 |
    +------------+------------------------+-----------+
    8 rows in set (0.001 sec)
    SELECT isbn, title, author_id
    FROM books 
    LIMIT 5;
    +------------+--------------------+-----------+
    | isbn       | title              | author_id |
    +------------+--------------------+-----------+
    | 0192834118 | Idiot              |         2 |
    | 0553211757 | Crime & Punishment |         2 |
    | 0553212168 | Brothers Karamozov |         2 |
    | 0553213695 | The Metamorphosis  |         1 |
    | 0679420290 | Crime & Punishment |         2 |
    +------------+--------------------+-----------+
    5 rows in set (0.001 sec)
    SELECT isbn, title, author_id
    FROM books 
    LIMIT 5, 10;
    +------------+------------------------+-----------+
    | isbn       | title                  | author_id |
    +------------+------------------------+-----------+
    | 067973452X | Notes from Underground |         2 |
    | 0805210407 | The Trial              |         1 |
    | 0805210644 | America                |         2 |
    +------------+------------------------+-----------+
    3 rows in set (0.001 sec)
    SELECT isbn, title
    FROM books
    WHERE author_id = 2
    LIMIT 5;
    +------------+------------------------+
    | isbn       | title                  |
    +------------+------------------------+
    | 0192834118 | Idiot                  |
    | 0553211757 | Crime & Punishment     |
    | 0553212168 | Brothers Karamozov     |
    | 0679420290 | Crime & Punishment     |
    | 067973452X | Notes from Underground |
    +------------+------------------------+
    5 rows in set (0.000 sec)
    SELECT isbn, title
    FROM books
    WHERE author_id = 2
    ORDER BY title ASC
    LIMIT 5;
    +------------+--------------------+
    | isbn       | title              |
    +------------+--------------------+
    | 0805210644 | America            |
    | 0553212168 | Brothers Karamozov |
    | 0553211757 | Crime & Punishment |
    | 0679420290 | Crime & Punishment |
    | 0192834118 | Idiot              |
    +------------+--------------------+
    5 rows in set (0.001 sec)
    SELECT isbn, title, 
    CONCAT(name_first, ' ', name_last) AS author
    FROM books
    JOIN authors USING (author_id)
    WHERE name_last = 'Dostoevsky'
    ORDER BY title ASC
    LIMIT 5;
    +------------+--------------------+-------------------+
    | isbn       | title              | author            |
    +------------+--------------------+-------------------+
    | 0805210644 | America            | Fyodor Dostoevsky |
    | 0553212168 | Brothers Karamozov | Fyodor Dostoevsky |
    | 0553211757 | Crime & Punishment | Fyodor Dostoevsky |
    | 0679420290 | Crime & Punishment | Fyodor Dostoevsky |
    | 0192834118 | Idiot              | Fyodor Dostoevsky |
    +------------+--------------------+-------------------+
    5 rows in set (0.00 sec)
    ...
    JOIN authors ON author_id = row_id
    ...
    SELECT isbn, title, 
    CONCAT(name_first, ' ', name_last) AS author
    FROM books
    JOIN authors USING (author_id)
    WHERE name_last LIKE 'Dostoevsk%'
    ORDER BY title ASC
    LIMIT 5;
    +------------+--------------------+-------------------+
    | isbn       | title              | author            |
    +------------+--------------------+-------------------+
    | 0805210644 | America            | Fyodor Dostoevsky |
    | 0553212168 | Brothers Karamozov | Fyodor Dostoevsky |
    | 0553211757 | Crime & Punishment | Fyodor Dostoevsky |
    | 0679420290 | Crime & Punishment | Fyodor Dostoevsky |
    | 0192834118 | Idiot              | Fyodor Dostoevsky |
    +------------+--------------------+-------------------+
    5 rows in set (0.001 sec)
    SELECT DISTINCT title
    FROM books
    JOIN authors USING (author_id)
    WHERE name_last = 'Dostoevsky'
    ORDER BY title;
    +------------------------+
    | title                  |
    +------------------------+
    | America                |
    | Brothers Karamozov     |
    | Crime & Punishment     |
    | Idiot                  |
    | Notes from Underground |
    +------------------------+
    SELECT DISTINCT HIGH_PRIORITY title
    FROM books
    JOIN authors USING (author_id)
    WHERE name_last = 'Dostoevsky'
    ORDER BY title;
    +------------------------+
    | title                  |
    +------------------------+
    | America                |
    | Brothers Karamozov     |
    | Crime & Punishment     |
    | Idiot                  |
    | Notes from Underground |
    +------------------------+
    SELECT SQL_CALC_FOUND_ROWS isbn, title
    FROM books
    JOIN authors USING (author_id)
    WHERE name_last = 'Dostoevsky'
    LIMIT 5;
    +------------+------------------------+
    | isbn       | title                  |
    +------------+------------------------+
    | 0192834118 | Idiot                  |
    | 0553211757 | Crime & Punishment     |
    | 0553212168 | Brothers Karamozov     |
    | 0679420290 | Crime & Punishment     |
    | 067973452X | Notes from Underground |
    +------------+------------------------+
    5 rows in set (0.001 sec)
    SELECT FOUND_ROWS();
    +--------------+
    | FOUND_ROWS() |
    +--------------+
    |            6 |
    +--------------+
    1 row in set (0.000 sec)

    The limitation of using xtrabackup_binlog_pos_innodb with the --no-lock option is that no DDL or modification of non-transactional tables should be done during the backup. If the last event in the binlog is a DDL/non-transactional update, the coordinates in the file xtrabackup_binlog_pos_innodb are too old. But as long as only InnoDB updates are done during the backup, the coordinates are correct.

    If the replica is using global transaction IDs, it writes the CHANGE MASTER command with the MASTER_USE_GTID option set to slave_pos. Otherwise, it writes the CHANGE MASTER command with the MASTER_LOG_FILE and MASTER_LOG_POS options using the master's binary log file and position. See CHANGE MASTER for more information.

  • The range key (dt) must be included in any PRIMARY or UNIQUE key.

  • The range key (dt) should be last in any keys it is in -- You have already "pruned" with it; it is almost useless in the index, especially at the beginning.

  • DATETIME, etc -- I picked this datatype because it is typical for a time series. Newer MySQL versions allow TIMESTAMP. INT could be used; etc.

  • There is an extra day (03-16 thru 04-16): The latest day is only partially full.

  • Run the script more often than necessary. For daily partitions, run the script twice a day, or even hourly. Why? Automatic repair.

  • PARTITION BY RANGE(TIMESTAMP) (version 5.5.15 / 5.6.3)

  • PARTITION BY RANGE(TO_SECONDS()) (5.6.0)

  • INT UNSIGNED with constants computed as unix timestamps.

  • INT UNSIGNED with constants for some non-time-based series.

  • MEDIUMINT UNSIGNED containing an "hour id": FLOOR(FROM_UNIXTIME(timestamp) / 3600)

  • Months, Quarters, etc: Concoct a notation that works.

  • Partition pruning does not happen on INSERTs (until Version 5.6.7), so INSERT needs to open all the partitions.

  • A possible 2-partition use case: read.php?24,633179,633179

  • 8192 partitions is a hard limit (1024 before ).

  • Before "native partitions" (5.7.6), each partition consumed a chunk of memory.

    See also:

  • Rick's RoTs - Rules of Thumb

  • PARTITIONing

  • AUTO_INCREMENT

  • Big DELETEs

  • MyISAM

  • Reference implementation, in Perl, with demo of daily partitions

  • Slides from Percona Amsterdam 2015

  • More on PARTITIONing

  • LinkedIn discussion

  • Why NOT Partition

  • Geoff Montee's Stored Proc

  • Rick James' site (partitionmaint)
  • SHOW CREATE PROCEDURE

  • SHOW PROCEDURE STATUS

  • Stored Routine Privileges

  • Information Schema ROUTINES Table

    Doing Time with MariaDB

    Master date and time handling in MariaDB with this guide on temporal data types, current time functions, and formatting dates for display.

    The recording of date and time in a MariaDB database is a very common requirement. To gather temporal data, one needs to know which types of columns to use in a table. More important is knowing how to record chronological data and how to retrieve it in various formats. Although this is a seemingly basic topic, there are many built-in time functions that allow for more accurate SQL statements and better formatting of data. In this article we will explore these various aspects of doing time with MariaDB.

    About Time

    Since date and time are only numeric strings, they can be stored in a regular character column. However, by using temporal data type columns, you can make use of several built-in functions offered by MariaDB. Currently, there are five temporal data types available: DATE, TIME, DATETIME, TIMESTAMP, and YEAR.
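    A table using each of them might look like this (illustrative table and column names; sample values shown in comments):

```sql
CREATE TABLE time_samples (
  d  DATE,       -- '2005-08-12'
  t  TIME,       -- '16:58:57'
  dt DATETIME,   -- '2005-08-12 16:58:57'
  ts TIMESTAMP,  -- like DATETIME; can be auto-set to the current time
  y  YEAR        -- 2005
);
```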

    ALTER TABLE tbl
        DROP PARTITION from20120314;
    ALTER TABLE tbl
        REORGANIZE PARTITION future INTO (
            PARTITION from20120415 VALUES LESS THAN (TO_DAYS('2012-04-16')),
            PARTITION future     VALUES LESS THAN MAXVALUE);
    CREATE TABLE tbl (
            dt DATETIME NOT NULL,  -- or DATE
            ...
            PRIMARY KEY (..., dt),
            UNIQUE KEY (..., dt),
            ...
        )
        PARTITION BY RANGE (TO_DAYS(dt)) (
            PARTITION START        VALUES LESS THAN (0),
            PARTITION from20120315 VALUES LESS THAN (TO_DAYS('2012-03-16')),
            PARTITION from20120316 VALUES LESS THAN (TO_DAYS('2012-03-17')),
            ...
            PARTITION from20120414 VALUES LESS THAN (TO_DAYS('2012-04-15')),
            PARTITION from20120415 VALUES LESS THAN (TO_DAYS('2012-04-16')),
            PARTITION future       VALUES LESS THAN MAXVALUE
        );
    PARTITION BY RANGE COLUMNS(dt) (
        PARTITION day_20100226 VALUES LESS THAN ('2010-02-27'), ...
    ALTER TABLE tbl REORGANIZE PARTITION
            future
       INTO (
            PARTITION from20150606 VALUES LESS THAN (736121),
            PARTITION future VALUES LESS THAN MAXVALUE
       )
    
       ALTER TABLE tbl
                        DROP PARTITION from20150603
    DROP PROCEDURE IF EXISTS name;
    CREATE PROCEDURE name ...;
    DELIMITER //
    
    CREATE PROCEDURE simpleproc (OUT param1 INT)
     BEGIN
      SELECT COUNT(*) INTO param1 FROM t;
     END;
    //
    
    DELIMITER ;
    
    CALL simpleproc(@a);
    
    SELECT @a;
    +------+
    | @a   |
    +------+
    |    1 |
    +------+
    DELIMITER //
    
    CREATE PROCEDURE simpleproc2 (
      OUT param1 CHAR(10) CHARACTER SET 'utf8' COLLATE 'utf8_bin'
    )
     BEGIN
      SELECT CONCAT('a'),f1 INTO param1 FROM t;
     END;
    //
    
    DELIMITER ;
    DELIMITER //
    
    CREATE PROCEDURE simpleproc2 (
      OUT param1 CHAR(10) CHARACTER SET 'utf8' COLLATE 'utf8_bin'
    )
     BEGIN
      SELECT CONCAT('a'),f1 INTO param1 FROM t;
     END;
    //
    ERROR 1304 (42000): PROCEDURE simpleproc2 already exists
    
    DELIMITER ;
    
    DELIMITER //
    
    CREATE OR REPLACE PROCEDURE simpleproc2 (
      OUT param1 CHAR(10) CHARACTER SET 'utf8' COLLATE 'utf8_bin'
    )
     BEGIN
      SELECT CONCAT('a'),f1 INTO param1 FROM t;
     END;
    //
    
    DELIMITER ;
    Query OK, 0 rows affected (0.03 sec)
    CREATE TABLE `Employees` (
      `ID` TINYINT(3) UNSIGNED NOT NULL AUTO_INCREMENT,
      `First_Name` VARCHAR(25) NOT NULL,
      `Last_Name` VARCHAR(25) NOT NULL,
      `Position` VARCHAR(25) NOT NULL,
      `Home_Address` VARCHAR(50) NOT NULL,
      `Home_Phone` VARCHAR(12) NOT NULL,
      PRIMARY KEY (`ID`)
    ) ENGINE=MyISAM;
    INSERT INTO `Employees` (`First_Name`, `Last_Name`, `Position`, `Home_Address`, `Home_Phone`)
    VALUES
      ('Mustapha', 'Mond', 'Chief Executive Officer', '692 Promiscuous Plaza', '326-555-3492'),
      ('Henry', 'Foster', 'Store Manager', '314 Savage Circle', '326-555-3847'),
      ('Bernard', 'Marx', 'Cashier', '1240 Ambient Avenue', '326-555-8456'),
      ('Lenina', 'Crowne', 'Cashier', '281 Bumblepuppy Boulevard', '328-555-2349'),
      ('Fanny', 'Crowne', 'Restocker', '1023 Bokanovsky Lane', '326-555-6329'),
      ('Helmholtz', 'Watson', 'Janitor', '944 Soma Court', '329-555-2478');
    CREATE TABLE `Hours` (
      `ID` TINYINT(3) UNSIGNED NOT NULL,
      `Clock_In` DATETIME NOT NULL,
      `Clock_Out` DATETIME NOT NULL
    ) ENGINE=MyISAM;
    INSERT INTO `Hours`
    VALUES ('1', '2005-08-08 07:00:42', '2005-08-08 17:01:36'),
      ('1', '2005-08-09 07:01:34', '2005-08-09 17:10:11'),
      ('1', '2005-08-10 06:59:56', '2005-08-10 17:09:29'),
      ('1', '2005-08-11 07:00:17', '2005-08-11 17:00:47'),
      ('1', '2005-08-12 07:02:29', '2005-08-12 16:59:12'),
      ('2', '2005-08-08 07:00:25', '2005-08-08 17:03:13'),
      ('2', '2005-08-09 07:00:57', '2005-08-09 17:05:09'),
      ('2', '2005-08-10 06:58:43', '2005-08-10 16:58:24'),
      ('2', '2005-08-11 07:01:58', '2005-08-11 17:00:45'),
      ('2', '2005-08-12 07:02:12', '2005-08-12 16:58:57'),
      ('3', '2005-08-08 07:00:12', '2005-08-08 17:01:32'),
      ('3', '2005-08-09 07:01:10', '2005-08-09 17:00:26'),
      ('3', '2005-08-10 06:59:53', '2005-08-10 17:02:53'),
      ('3', '2005-08-11 07:01:15', '2005-08-11 17:04:23'),
      ('3', '2005-08-12 07:00:51', '2005-08-12 16:57:52'),
      ('4', '2005-08-08 06:54:37', '2005-08-08 17:01:23'),
      ('4', '2005-08-09 06:58:23', '2005-08-09 17:00:54'),
      ('4', '2005-08-10 06:59:14', '2005-08-10 17:00:12'),
      ('4', '2005-08-11 07:00:49', '2005-08-11 17:00:34'),
      ('4', '2005-08-12 07:01:09', '2005-08-12 16:58:29'),
      ('5', '2005-08-08 07:00:04', '2005-08-08 17:01:43'),
      ('5', '2005-08-09 07:02:12', '2005-08-09 17:02:13'),
      ('5', '2005-08-10 06:59:39', '2005-08-10 17:03:37'),
      ('5', '2005-08-11 07:01:26', '2005-08-11 17:00:03'),
      ('5', '2005-08-12 07:02:15', '2005-08-12 16:59:02'),
      ('6', '2005-08-08 07:00:12', '2005-08-08 17:01:02'),
      ('6', '2005-08-09 07:03:44', '2005-08-09 17:00:00'),
      ('6', '2005-08-10 06:54:19', '2005-08-10 17:03:31'),
      ('6', '2005-08-11 07:00:05', '2005-08-11 17:02:57'),
      ('6', '2005-08-12 07:02:07', '2005-08-12 16:58:23');
    SELECT
      `Employees`.`First_Name`,
      `Employees`.`Last_Name`,
      `Hours`.`Clock_In`,
      `Hours`.`Clock_Out`
    FROM `Employees`
    INNER JOIN `Hours` ON `Employees`.`ID` = `Hours`.`ID`
    WHERE `Employees`.`First_Name` = 'Helmholtz'
    AND DATE_FORMAT(`Hours`.`Clock_In`, '%Y-%m-%d') >= '2005-08-08'
    AND DATE_FORMAT(`Hours`.`Clock_In`, '%Y-%m-%d') <= '2005-08-12'
    AND DATE_FORMAT(`Hours`.`Clock_In`, '%H:%i:%S') > '07:00:59';
    +------------+-----------+---------------------+---------------------+
    | First_Name | Last_Name | Clock_In            | Clock_Out           |
    +------------+-----------+---------------------+---------------------+
    | Helmholtz  | Watson    | 2005-08-09 07:03:44 | 2005-08-09 17:00:00 |
    | Helmholtz  | Watson    | 2005-08-12 07:02:07 | 2005-08-12 16:58:23 |
    +------------+-----------+---------------------+---------------------+
    SELECT
      `Employees`.`First_Name`,
      `Employees`.`Last_Name`,
      `Hours`.`Clock_In`,
      `Hours`.`Clock_Out`,
      (TIMESTAMPDIFF(MINUTE,`Hours`.`Clock_Out`,`Hours`.`Clock_In`) + 601) AS Difference
    FROM `Employees`
    INNER JOIN `Hours` USING (`ID`)
    WHERE DATE_FORMAT(`Hours`.`Clock_In`, '%Y-%m-%d') >= '2005-08-08'
    AND DATE_FORMAT(`Hours`.`Clock_In`, '%Y-%m-%d') <= '2005-08-12'
    AND DATE_FORMAT(`Hours`.`Clock_In`, '%H:%i:%S') > '07:00:59'
    AND TIMESTAMPDIFF(MINUTE,`Hours`.`Clock_Out`,`Hours`.`Clock_In`) > -601;
    +------------+-----------+---------------------+---------------------+------------+
    | First_Name | Last_Name | Clock_In            | Clock_Out           | Difference |
    +------------+-----------+---------------------+---------------------+------------+
    | Mustapha   | Mond      | 2005-08-12 07:02:29 | 2005-08-12 16:59:12 |          4 |
    | Henry      | Foster    | 2005-08-11 07:01:58 | 2005-08-11 17:00:45 |          2 |
    | Henry      | Foster    | 2005-08-12 07:02:12 | 2005-08-12 16:58:57 |          4 |
    | Bernard    | Marx      | 2005-08-09 07:01:10 | 2005-08-09 17:00:26 |          1 |
    | Lenina     | Crowne    | 2005-08-12 07:01:09 | 2005-08-12 16:58:29 |          3 |
    | Fanny      | Crowne    | 2005-08-11 07:01:26 | 2005-08-11 17:00:03 |          2 |
    | Fanny      | Crowne    | 2005-08-12 07:02:15 | 2005-08-12 16:59:02 |          4 |
    | Helmholtz  | Watson    | 2005-08-09 07:03:44 | 2005-08-09 17:00:00 |          4 |
    | Helmholtz  | Watson    | 2005-08-12 07:02:07 | 2005-08-12 16:58:23 |          4 |
    +------------+-----------+---------------------+---------------------+------------+
    CREATE SQL SECURITY INVOKER VIEW Employee_Tardiness AS 
    SELECT
      `Employees`.`First_Name`,
      `Employees`.`Last_Name`,
      `Hours`.`Clock_In`,
      `Hours`.`Clock_Out`,
    (TIMESTAMPDIFF(MINUTE,`Hours`.`Clock_Out`,`Hours`.`Clock_In`) + 601) as Difference
    FROM `Employees`
    INNER JOIN `Hours` USING (`ID`)
    WHERE DATE_FORMAT(`Hours`.`Clock_In`, '%Y-%m-%d') >= '2005-08-08'
    AND DATE_FORMAT(`Hours`.`Clock_In`, '%Y-%m-%d') <= '2005-08-12'
    AND DATE_FORMAT(`Hours`.`Clock_In`, '%H:%i:%S') > '07:00:59'
    AND TIMESTAMPDIFF(MINUTE,`Hours`.`Clock_Out`,`Hours`.`Clock_In`) > -601;
    SELECT * FROM Employee_Tardiness;
    +------------+-----------+---------------------+---------------------+------------+
    | First_Name | Last_Name | Clock_In            | Clock_Out           | Difference |
    +------------+-----------+---------------------+---------------------+------------+
    | Mustapha   | Mond      | 2005-08-12 07:02:29 | 2005-08-12 16:59:12 |          5 |
    | Henry      | Foster    | 2005-08-11 07:01:58 | 2005-08-11 17:00:45 |          3 |
    | Henry      | Foster    | 2005-08-12 07:02:12 | 2005-08-12 16:58:57 |          5 |
    | Bernard    | Marx      | 2005-08-09 07:01:10 | 2005-08-09 17:00:26 |          2 |
    | Lenina     | Crowne    | 2005-08-12 07:01:09 | 2005-08-12 16:58:29 |          4 |
    | Fanny      | Crowne    | 2005-08-09 07:02:12 | 2005-08-09 17:02:13 |          1 |
    | Fanny      | Crowne    | 2005-08-11 07:01:26 | 2005-08-11 17:00:03 |          3 |
    | Fanny      | Crowne    | 2005-08-12 07:02:15 | 2005-08-12 16:59:02 |          5 |
    | Helmholtz  | Watson    | 2005-08-09 07:03:44 | 2005-08-09 17:00:00 |          5 |
    | Helmholtz  | Watson    | 2005-08-12 07:02:07 | 2005-08-12 16:58:23 |          5 |
    +------------+-----------+---------------------+---------------------+------------+
    SELECT * FROM Employee_Tardiness WHERE Difference >=5;
    +------------+-----------+---------------------+---------------------+------------+
    | First_Name | Last_Name | Clock_In            | Clock_Out           | Difference |
    +------------+-----------+---------------------+---------------------+------------+
    | Mustapha   | Mond      | 2005-08-12 07:02:29 | 2005-08-12 16:59:12 |          5 |
    | Henry      | Foster    | 2005-08-12 07:02:12 | 2005-08-12 16:58:57 |          5 |
    | Fanny      | Crowne    | 2005-08-12 07:02:15 | 2005-08-12 16:59:02 |          5 |
    | Helmholtz  | Watson    | 2005-08-09 07:03:44 | 2005-08-09 17:00:00 |          5 |
    | Helmholtz  | Watson    | 2005-08-12 07:02:07 | 2005-08-12 16:58:23 |          5 |
    +------------+-----------+---------------------+---------------------+------------+

    MariaDB has several column types for temporal data: DATE, TIME, DATETIME, TIMESTAMP, and YEAR. The DATE column type is for recording the date only and is basically in this format: yyyy-mm-dd. The TIME column type is for recording time in this format: hhh:mm:ss (the hours portion may run to three digits because TIME values can also represent elapsed time). To record a combination of date and time, there is the DATETIME column type: yyyy-mm-dd hh:mm:ss.

    The TIMESTAMP column is similar to DATETIME, but it's a little limited in its range of allowable time. It starts at the Unix epoch time (i.e., 1970-01-01) and ends on 2038-01-19 (recent MariaDB releases extend the upper bound to 2106-02-07). Finally, the YEAR data type is for recording only the year in a column: yy or yyyy. For the examples in this article, DATE, TIME, and DATETIME columns are used. The database referenced is for a fictitious psychiatry practice that keeps track of its patients and billable hours in MariaDB.
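    To make the five temporal types concrete, here is a minimal sketch of a table that declares one column of each (the table and column names are illustrative only, not from the article's psychiatry database):

    ```sql
    CREATE TABLE time_samples (
      sample_date  DATE,       -- yyyy-mm-dd
      sample_time  TIME,       -- hhh:mm:ss
      sample_dt    DATETIME,   -- yyyy-mm-dd hh:mm:ss
      sample_ts    TIMESTAMP,  -- like DATETIME, but limited to the TIMESTAMP range
      sample_year  YEAR        -- yyyy
    );
    ```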

    Telling Time

    To record the current date and time in a MariaDB table, there are a few built-in functions that may be used. First, to record the date there are the functions CURRENT_DATE and CURDATE( ) (depending on your style), which both produce the same results (e.g., 2017-08-01). Notice that CURDATE( ) requires parentheses and the other does not. With many functions, a column name or other variable is placed inside the parentheses to get a result. With functions like CURDATE( ), nothing may go inside the parentheses. Since these two functions retrieve the current date in the format of the DATE column type, they can be used to fill in a DATE column when inserting a row:

    The column session_date is a DATE column. Notice that there are no quotes around the date function. If there were, it would be taken as a literal value rather than a function. Incidentally, I've skipped discussing how the table was set up. If you're not familiar with how to set up a table, you may want to read the MariaDB Basics article. To see what was just recorded by the INSERT statement above, the following may be entered (results follow):

    Notice in the billable_work table that the primary key column (i.e., rec_id) is an automatically generated and incremental number column (i.e., AUTO_INCREMENT). As long as another record is not created or the user does not exit from the mariadb client or otherwise end the session, the LAST_INSERT_ID( ) function will retrieve the value of the rec_id for the last record entered by the user.

    To record the time of an appointment for a patient in a time data type column, CURRENT_TIME or CURTIME( ) are used in the same way to insert the time. The following is entered to update the row created above to mark the starting time of the appointment—another SELECT statement follows with the results:

    The column session_time is a time column. To record the date and time together in the same column, CURRENT_TIMESTAMP or SYSDATE( ) or NOW( ) can be used. All three functions produce the same time format: yyyy-mm-dd hh:mm:ss. Therefore, the column's data type would have to be DATETIME to use them.
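    For example, assuming a hypothetical DATETIME column named session_stamp had been added to the billable_work table, any of the three functions could fill it in:

    ```sql
    UPDATE billable_work
    SET session_stamp = NOW()   -- CURRENT_TIMESTAMP or SYSDATE( ) would work here, too
    WHERE rec_id = '2462';
    ```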

    How to get a Date

    Although MariaDB records the date in a fairly agreeable format, you may want to present the date when it's retrieved in a different format. Or, you may want to extract part of the date, such as only the day of the month. There are many functions for reformatting and selectively retrieving date and time information. To start off with, let's select a column with a data type of DATE and look at the functions available for retrieving each component. To extract the year, there's the YEAR( ) function. For extracting just the month, the MONTH( ) function could be called upon. And to grab the day of the month, DAYOFMONTH( ) will work. Using the record entered above, here's what an SQL statement and its results would look like in which the session date is broken up into separate parts, but in a different order:

    For those who aren't familiar with the keyword AS, it's used to label a column's output and may be referenced within an SQL statement. Splitting up the elements of a date can be useful in analyzing a particular element. If the bookkeeper of the fictitious psychiatry office needed to determine if the day of the week of each session was on a Saturday because the billing rate would be higher (time and a half), the DAYOFWEEK( ) function could be used. To spice up the examples, let's wrap the date function up in an IF( ) function that tests for the day of the week and sets the billing rate accordingly.

    Since we've slipped in the IF( ) function, we should explain its format. The test condition is listed first within the parentheses. In this case, the test is checking if the session date is the seventh day of the week (i.e., a Saturday). Then, what MariaDB should display is given if the test passes, followed by the result if it fails.

    Similar to the DAYOFWEEK( ) function, there's also WEEKDAY( ). The difference lies in the numbering: DAYOFWEEK( ) returns 1 (Sunday) through 7 (Saturday), while WEEKDAY( ) returns 0 (Monday) through 6 (Sunday). Having Saturday and Sunday symbolized by 5 and 6 with WEEKDAY( ) can be handy in constructing an IF statement that has a test component like "WEEKDAY(session_date) > 4" to determine if a date is a weekend day. This is cleaner than testing for the values 1 and 7 that DAYOFWEEK( ) returns for those days.
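    A quick way to see the two numbering schemes side by side is to select both functions for the same date (using the record entered earlier):

    ```sql
    SELECT session_date,
      DAYOFWEEK(session_date) AS 'DAYOFWEEK (Sun=1)',
      WEEKDAY(session_date) AS 'WEEKDAY (Mon=0)'
    FROM billable_work
    WHERE rec_id='2462';
    ```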

    There is a function for determining the day of the year: DAYOFYEAR( ). It's not used often, but it is available if you ever need it. Occasionally, though, knowing the quarter of a year for a date can be useful for financial accounting. Rather than set up a formula in a script to determine the quarter, the QUARTER( ) function can do this easily. For instance, suppose an accountant wants a list of a doctor's sessions for each patient for the previous quarter. These three SQL statements could be entered in sequence to achieve the results that follow:

    This example is the most complicated so far. But it's not too difficult to understand if we pull it apart. The first SQL statement sets up a user variable containing the previous quarter (i.e., 1, 2, 3, or 4). This variable is needed in the other two statements. The IF( ) clause in the first statement checks if the quarter of the current date minus one is zero. It will equal zero when it's run during the first quarter of a year. During a first quarter, of course, the previous quarter is the fourth quarter of the previous year. So, if the equation equals zero, then the variable @LASTQTR is set to 4. Otherwise, @LASTQTR is set to the value of the current quarter minus one. The second statement is necessary to ensure that the records for the correct year are selected. So, if @LASTQTR equals four, then @YR needs to equal last year. If not, @YR is set to the current year. With the user variables set to the correct quarter and year, the SELECT statement can be entered. The COUNT( ) function counts the number of appointments that match the WHERE clause for each patient based on the GROUP BY clause. The WHERE clause looks for sessions with a quarter that equals @LASTQTR and a year that equals @YR, as well as the doctor's identification number. In summary, what we end up with is a set of SQL statements that retrieve the desired information regardless of which quarter or year it's entered.

    What is the Time?

    The last section covered how to retrieve pieces of a date column. Now let's look at how to do the same with a time column. To extract just the hour of a time saved in MariaDB, the HOUR( ) function could be used. For the minute and second, there's MINUTE( ) and SECOND( ). Let's put them all together in one straightforward SELECT statement:

    Date & Time Combined

    All of the examples given so far have involved separate columns for date and time. The EXTRACT( ) function, however, will allow a particular component to be extracted from a combined column type (i.e., DATETIME or TIMESTAMP). The format is EXTRACT(date_type FROM date_column) where date_type is the component to retrieve and date_column is the name of the column from which to extract data. To extract the year, the date_type would be YEAR; for month, MONTH is used; and for day, there's DAY. To extract time elements, HOUR is used for hour, MINUTE for minute, and SECOND for second. Although that's all pretty simple, let's look at an example. Suppose the table billable_work has a column called appointment (a datetime column) that contains the date and time for which the appointment was scheduled (as opposed to the time it actually started in session_time). To get the hour and minute for a particular date, the following SQL statement will suffice:

    This statement calls upon another table (patients) which holds patient information such as their names. It requires a connecting point between the tables (i.e., the patient_id from each table). If you're confused on how to form relationships between tables in a SELECT statement, you may want to go back and read the Getting Data from MariaDB article. The SQL statement above would be used to retrieve the appointments for one doctor for one day, giving results like this:

    In this example, the time elements are separated and they don't include the date. With the EXTRACT( ) function, however, you can also return combined date and time elements. There is DAY_HOUR for the day and hour; there's DAY_MINUTE for the day, hour, and minute; DAY_SECOND for day, hour, minute, and second; and YEAR_MONTH for year and month. There are also some time only combinations: HOUR_MINUTE for hour and minute; HOUR_SECOND for hour, minute, and second; and MINUTE_SECOND for minute and second. However, there's not a MONTH_DAY to allow the combining of the two extracts in the WHERE clause of the last SELECT statement above. Nevertheless, we'll modify the example above and use the HOUR_MINUTE date_type to retrieve the hour and minute in one resulting column. It would only require the second and third lines to be deleted and replaced with this:

    The problem with this output, though, is that the times aren't displayed in a very pleasing way. For more natural date and time displays, there are a few simple date formatting functions available, and there are the DATE_FORMAT( ) and TIME_FORMAT( ) functions.

    Fine Time Pieces

    The simple functions that we mentioned are used for reformatting the output of days and months. To get the date of patient sessions for August, but in a wordier format, MONTHNAME( ) and DAYNAME( ) could be used:

    In this statement the CONCAT( ) splices together the results of several date functions along with spaces and other characters. The EXTRACT( ) function was eliminated from the WHERE clause and instead a simple numeric test for sessions in August was given. Although EXTRACT( ) is fairly straightforward, this all can be accomplished with less typing by using the DATE_FORMAT( ) function.

    The DATE_FORMAT( ) function has over thirty options for formatting the date to your liking. Plus, you can combine the options and add your own separators and other text. The syntax is DATE_FORMAT(date_column, 'options & characters'). As an example, let's reproduce the last SQL statement by using the DATE_FORMAT( ) function for formatting the date of the appointment and for scanning for appointments in August only:

    This produces the exact same output as above, but with a more succinct statement. The option %W gives the name of the day of the week. The option %M provides the month's name. The option %e displays the day of the month (%d would work, but it left-pads single-digit dates with zeros). Finally, %Y is for the four-digit year. All other elements within the quotes (i.e., the spaces, the dash, and the comma) are literal characters for a nicer display.

    With DATE_FORMAT( ), time elements of a field also can be formatted. For instance, suppose we also wanted the hour and minute of the appointment. We would only need to change the second line of the SQL statement above (to save space, patient_name was eliminated):

    The word 'at' was added along with the formatting option %r, which gives the time with AM or PM at the end.

    Although it may be a little confusing at first, once you've learned some of the common formatting options, DATE_FORMAT( ) is much easier to use than EXTRACT( ). There are many more options to DATE_FORMAT( ) that we haven't mentioned. For a complete list of the options available, see the DATE_FORMAT( ) documentation page.

    Clean up Time

    In addition to DATE_FORMAT( ), MariaDB has a comparable built-in function for formatting only time: TIME_FORMAT( ). The syntax is the same and uses the same options as DATE_FORMAT( ), except only the time-related formatting options apply. As an example, here's an SQL statement that a doctor might use at the beginning of each day to get a list of her appointments for the day:

    The option %l provides the hour on a 12-hour clock. The %p at the end indicates (with AM or PM) whether the time is before or after noon. The %i option gives the minute. The colon and the space are for additional display appeal. Of course, all of this can be done exactly the same way with the DATE_FORMAT( ) function. As for the DATE_FORMAT( ) component in the WHERE clause here, the date is formatted exactly as it is with CURDATE( ) (i.e., 2017-08-30) so that the two values may be compared properly.

    Time to End

    Many developers use PHP, Perl, or some other scripting language with MariaDB. Sometimes developers will solve retrieval problems with longer scripts rather than learn precisely how to extract temporal data with MariaDB. As you can see in several of the examples here (particularly the one using the QUARTER( ) function), you can accomplish a great deal within MariaDB. When faced with a potentially complicated SQL statement, try creating it in the mariadb client first. Once you get what you need (under various conditions) and in the format desired, then copy the statement into your script. This practice can greatly help you improve your MariaDB statements and scripting code.

    CC BY-SA / Gnu FDL

    INSERT INTO billable_work
    (doctor_id, patient_id, session_date)
    VALUES('1021', '1256', CURRENT_DATE);
    SELECT rec_id, doctor_id, 
    patient_id, session_date
    FROM billable_work
    WHERE rec_id=LAST_INSERT_ID();
    
    +--------+-----------+------------+--------------+
    | rec_id | doctor_id | patient_id | session_date |
    +--------+-----------+------------+--------------+
    |   2462 | 1021      | 1256       | 2017-08-23   |
    +--------+-----------+------------+--------------+
    UPDATE billable_work
    SET session_time=CURTIME()
    WHERE rec_id='2462';
    
    SELECT patient_id, session_date, session_time
    FROM billable_work
    WHERE rec_id='2462';
    
    +------------+--------------+--------------+
    | patient_id | session_date | session_time |
    +------------+--------------+--------------+
    | 1256       | 2017-08-23   | 10:30:23     |
    +------------+--------------+--------------+
    SELECT MONTH(session_date) AS Month,
    DAYOFMONTH(session_date) AS Day,
    YEAR(session_date) AS Year
    FROM billable_work
    WHERE rec_id='2462';
    
    +-------+------+------+
    | Month | Day  | Year |
    +-------+------+------+
    |     8 |   23 | 2017 |
    +-------+------+------+
    SELECT patient_id AS 'Patient ID',
    session_date AS 'Date of Session',
    IF(DAYOFWEEK(session_date)=7, 1.5, 1.0)
      AS 'Billing Rate' 
    FROM billable_work
    WHERE rec_id='2462';
    
    +-------------+-----------------+--------------+
    | Patient ID  | Date of Session | Billing Rate |
    +-------------+-----------------+--------------+
    |        1256 |    2017-08-23   |          1.0 |
    +-------------+-----------------+--------------+
    SET @LASTQTR:=IF((QUARTER(CURDATE())-1)=0, 
    4, QUARTER(CURDATE())-1);
    
    SET @YR:=IF(@LASTQTR=4, 
    YEAR(NOW())-1, YEAR(NOW()));
    
    SELECT patient_id AS 'Patient ID', 
    COUNT(session_time) 
      AS 'Number of Sessions'
    FROM billable_work
    WHERE QUARTER(session_date) = @LASTQTR
      AND YEAR(session_date) = @YR
      AND doctor_id='1021'
    GROUP BY patient_id
    ORDER BY patient_id LIMIT 5;
    
    +------------+--------------------+
    | Patient ID | Number of Sessions |
    +------------+--------------------+
    | 1104       |                 10 |
    | 1142       |                  7 |
    | 1203       |                 18 |
    | 1244       |                  6 |
    | 1256       |                 12 |
    +------------+--------------------+
    SELECT HOUR(session_time) AS Hour, 
    MINUTE(session_time) AS Minute, 
    SECOND(session_time) AS Second
    FROM billable_work
    WHERE rec_id='2462';
    
    +------+--------+--------+
    | Hour | Minute | Second |
    +------+--------+--------+
    |   10 |     30 |     23 |
    +------+--------+--------+
    SELECT patient_name AS Patient, 
    EXTRACT(HOUR FROM appointment) AS Hour, 
    EXTRACT(MINUTE FROM appointment) AS Minute
    FROM billable_work, patients
    WHERE doctor_id='1021' 
      AND EXTRACT(MONTH FROM appointment)='8' 
      AND EXTRACT(DAY FROM appointment)='30'
      AND billable_work.patient_id =
        patients.patient_id;
    +-------------------+------+--------+
    | Patient           | Hour | Minute |
    +-------------------+------+--------+
    | Michael Zabalaoui |   10 |     00 |
    | Jerry Neumeyer    |   11 |     00 |
    | Richard Stringer  |   13 |     30 |
    | Janice Sogard     |   14 |     30 |
    +-------------------+------+--------+
    ...
    EXTRACT(HOUR_MINUTE FROM appointment) 
      AS Appointment 
    ...
    
    +-------------------+-------------+
    | Patient           | Appointment |
    +-------------------+-------------+
    | Michael Zabalaoui |        1000 |
    | Jerry Neumeyer    |        1100 |
    | Richard Stringer  |        1330 |
    | Janice Sogard     |        1430 |
    +-------------------+-------------+
    SELECT patient_name AS Patient, 
    CONCAT(DAYNAME(appointment), ' - ', 
      MONTHNAME(appointment), ' ', 
      DAYOFMONTH(appointment), ', ', 
      YEAR(appointment)) AS Appointment
    FROM billable_work, patients
    WHERE doctor_id='1021'
      AND billable_work.patient_id =
        patients.patient_id
      AND appointment>'2017-08-01' 
      AND appointment<'2017-08-31' 
    LIMIT 1;
    
    +-------------------+-----------------------------+
    | Patient           | Appointment                 |
    +-------------------+-----------------------------+
    | Michael Zabalaoui | Wednesday - August 30, 2017 |
    +-------------------+-----------------------------+
    SELECT patient_name AS Patient, 
    DATE_FORMAT(appointment, '%W - %M %e, %Y') 
      AS Appointment
    FROM billable_work, patients
    WHERE doctor_id='1021'
      AND billable_work.patient_id = 
        patients.patient_id
      AND DATE_FORMAT(appointment, '%c') = 8
    LIMIT 1;
    SELECT 
    DATE_FORMAT(appointment, '%W - %M %e, %Y at %r') 
      AS Appointment
    ...
    
    +--------------------------------------------+
    | Appointment                                |
    +--------------------------------------------+
    | Wednesday - August 30, 2017 at 02:11:19 AM |
    +--------------------------------------------+
    SELECT patient_name AS Patient, 
    TIME_FORMAT(appointment, '%l:%i %p') 
      AS Appointment
    FROM billable_work, patients
    WHERE doctor_id='1021'
      AND billable_work.patient_id = 
        patients.patient_id
      AND DATE_FORMAT(appointment, '%Y-%m-%d') =
        CURDATE();
    
    +-------------------+-------------+
    | Patient           | Appointment |
    +-------------------+-------------+
    | Michael Zabalaoui |    10:00 AM |
    | Jerry Neumeyer    |    11:00 AM |
    | Richard Stringer  |    01:30 PM |
    | Janice Sogard     |    02:30 PM |
    +-------------------+-------------+

    mariadb-backup Overview

    An introduction to the mariadb-backup utility, detailing its features, installation process, and support for hot online backups of InnoDB tables.

    mariadb-backup was previously called mariabackup.

    mariadb-backup is an open source tool provided by MariaDB for performing physical online backups of InnoDB, Aria and MyISAM tables. For InnoDB, “hot online” backups are possible. It was originally forked from Percona XtraBackup 2.3.8. It is available on Linux and Windows.

    This tool provides a production-quality, nearly non-blocking method for performing full backups on running systems. While partial backups with mariadb-backup are technically possible, they require many steps and cannot be restored directly onto existing servers containing other data.

    Supported Features

    mariadb-backup supports all of the main features of Percona XtraBackup 2.3.8, plus:

    • Backup/Restore of tables using Data-at-Rest Encryption.

    • Backup/Restore of tables using InnoDB Page Compression.

    • with Galera Cluster.

    • Microsoft Windows support.

    Supported Features in MariaDB Enterprise Backup

    MariaDB Enterprise Backup supports some additional features, such as:

    • Minimizes locks during the backup to permit more concurrency and to enable faster backups.

      • This relies on the usage of BACKUP STAGE commands and DDL logging.

      • This includes no locking during the copy phase of ALTER TABLE statements, which tends to be the longest phase of these statements.

    Installing mariadb-backup

    Installing on Linux

    The mariadb-backup executable is included in binary tarballs on Linux.

    Installing with a Package Manager

    mariadb-backup can also be installed via a package manager on Linux. Many Linux distributions provide MariaDB software "out of the box", including mariadb-backup. If your Linux distribution doesn't, however, you can install using a MariaDB repository.

    In order to do so, your system needs to be configured to install from one of the MariaDB repositories.

    You can configure your package manager to install it from MariaDB Corporation's MariaDB Package Repository by using the MariaDB Package Repository setup script.

    You can also configure your package manager to install it from MariaDB Foundation's MariaDB Repository by using the .

    Installing with yum/dnf

    On RHEL, CentOS, Fedora, and other similar Linux distributions, it is highly recommended to install the relevant RPM package from MariaDB's repository using yum or dnf. Starting with RHEL 8 and Fedora 22, yum has been replaced by dnf, which is the next major version of yum. However, yum commands still work on many systems that use dnf. For example:
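    A typical installation command might look like this (the package name assumes MariaDB's own repository is configured on the system):

    ```shell
    sudo yum install MariaDB-backup
    ```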

    Installing with apt-get

    On Debian, Ubuntu, and other similar Linux distributions, it is highly recommended to install the relevant DEB package from MariaDB's repository using apt-get. For example:
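    A typical installation command might look like this (assuming MariaDB's repository is configured on the system):

    ```shell
    sudo apt-get install mariadb-backup
    ```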

    Installing with zypper

    On SLES, OpenSUSE, and other similar Linux distributions, it is highly recommended to install the relevant RPM package from MariaDB's repository using zypper. For example:
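    A typical installation command might look like this (assuming MariaDB's repository is configured on the system):

    ```shell
    sudo zypper install MariaDB-backup
    ```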

    Installing on Windows

    The mariadb-backup executable is included in MSI and ZIP packages on Windows.

    When using the MSI installer, mariadb-backup can be installed by selecting Backup utilities:

    Usage

    The general syntax for running mariadb-backup is:
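    A sketch of the general form follows; the target directory and credentials shown are illustrative, not defaults:

    ```shell
    mariadb-backup [options]

    # For example, to take a backup:
    mariadb-backup --backup \
       --target-dir=/var/mariadb/backup/ \
       --user=mariadb-backup --password=mypassword

    # ...and later, to prepare that backup for restore:
    mariadb-backup --prepare \
       --target-dir=/var/mariadb/backup/
    ```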

    For in-depth explanations on how to use mariadb-backup, see:

    Options

    Options supported by mariadb-backup can be found on the page.

    mariadb-backup will currently silently ignore unknown command-line options, so be extra careful about accidentally including typos in options or accidentally using options from later mariadb-backup versions. The reason for this is that mariadb-backup currently treats command-line options and options from option files equivalently. When it reads from these option files, it has to read a lot of options from the server option groups, yet it does not know about many of the options that it normally encounters in those groups. If mariadb-backup raised an error or warning when it encountered an unknown option, then normal use would generate a large amount of log messages. Therefore, mariadb-backup is designed to silently ignore the unknown options instead.

    Option Files

    In addition to reading options from the command-line, mariadb-backup can also read options from option files.

    The following options relate to how MariaDB command-line tools handle option files. They must be given as the first argument on the command-line:

    Option
    Description

    Server Option Groups

    mariadb-backup reads server options from the following option groups from option files:

    Group
    Description

    Client Option Groups

    mariadb-backup reads client options from the following option groups from option files:

    Group
    Description

    Backup History Table

    mariadb-backup can optionally track your backup operations in a database table. This provides a centralized audit log and allows you to automate incremental backups by referencing the logical name of the previous backup instead of managing file paths.

    Table Location and Schema Changes (MariaDB 10.11):

    • MariaDB 10.11 and later: The history table is mysql.mariadb_backup_history and uses the InnoDB storage engine (transactional).

    • MariaDB 10.10 and earlier: The history table is PERCONA_SCHEMA.xtrabackup_history and uses the CSV storage engine.

    On the first run after upgrading to MariaDB 10.11, mariadb-backup will attempt to migrate the old CSV table to the new InnoDB table. This requires specific privileges (see below).

    Authentication and Privileges

    mariadb-backup needs to authenticate with the database server when it performs a backup operation (i.e. when the option is specified). For most use cases, the user account that performs the backup needs to have the following global privileges on the database server.

    The required privileges are:
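    As a sketch, a dedicated backup account could be created and granted along these lines (the account name and password are illustrative; on versions before MariaDB 10.5, REPLICATION CLIENT is the older name for BINLOG MONITOR):

    ```sql
    CREATE USER 'mariadb-backup'@'localhost' IDENTIFIED BY 'mypassword';
    GRANT RELOAD, PROCESS, LOCK TABLES, BINLOG MONITOR
      ON *.* TO 'mariadb-backup'@'localhost';
    ```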

    If your database server is also using the MyRocks storage engine, then the user account that performs the backup will also need the SUPER privilege. This is because mariadb-backup creates a checkpoint of this data by setting a system variable, which requires this privilege.

    CONNECTION ADMIN is also required where is greater than 0, and isn't applied in order to queries.

    REPLICA MONITOR (or alias SLAVE MONITOR) is also required where or is specified.

    To use the option (or the incremental history options), the backup user requires specific privileges on the history table.

    The user needs INSERT to create history records and SELECT to read them for incremental backups:
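    For example, assuming the illustrative backup account used elsewhere on this page:

    ```sql
    GRANT INSERT, SELECT ON mysql.mariadb_backup_history
      TO 'mariadb-backup'@'localhost';
    ```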

    The user needs privileges on the legacy PERCONA_SCHEMA:

    For Upgrading to 10.11 (One-Time Migration)

    If upgrading from an older version, mariadb-backup will attempt to migrate the old table to the new location on the first run. The backup user needs privileges to move and modify the old table:

    Alternatively, you can perform this migration manually before running the backup:
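    One possible sketch of such a manual migration follows; this assumes the legacy CSV table exists, and you should verify that the resulting column definitions match what your mariadb-backup version expects before relying on it:

    ```sql
    RENAME TABLE PERCONA_SCHEMA.xtrabackup_history
      TO mysql.mariadb_backup_history;
    ALTER TABLE mysql.mariadb_backup_history ENGINE=InnoDB;
    ```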

    The user account information can be specified with the and command-line options. For example:
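    For example (the account name, password, and target directory are illustrative):

    ```shell
    mariadb-backup --backup \
       --target-dir=/var/mariadb/backup/ \
       --user=mariadb-backup --password=mypassword
    ```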

    The user account information can also be specified in a supported option group in an option file. For example:
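    For example, in a client option file such as ~/.my.cnf (credentials illustrative):

    ```
    [mariadb-backup]
    user=mariadb-backup
    password=mypassword
    ```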

    mariadb-backup does not need to authenticate with the database server when preparing or restoring a backup.

    File System Permissions

    mariadb-backup has to read MariaDB's files from the file system. Therefore, when you run mariadb-backup as a specific operating system user, you should ensure that user account has sufficient permissions to read those files.

    If you are using Linux and if you installed MariaDB with a package manager, then MariaDB's files will probably be owned by the mysql user and the mysql group.

    Using mariadb-backup with Data-at-Rest Encryption

    mariadb-backup supports Data-at-Rest Encryption.

    mariadb-backup will query the server to determine which key management and encryption plugin is being used, and then it will load that plugin itself, which means that mariadb-backup needs to be able to load the plugin's shared library.

    mariadb-backup will also query the server to determine which encryption keys it needs to use.

    In other words, mariadb-backup is able to figure out a lot of encryption-related information on its own, so normally one doesn't need to provide any extra options to backup or restore encrypted tables.

    mariadb-backup backs up encrypted and unencrypted tables as they are on the original server. If a table is encrypted, then the table will remain encrypted in the backup. Similarly, if a table is unencrypted, then the table will remain unencrypted in the backup.

    The primary reason that mariadb-backup needs to be able to encrypt and decrypt data is that it needs to apply InnoDB redo log records to make the data consistent when the backup is prepared. As a consequence, mariadb-backup does not perform many encryption or decryption operations when the backup is initially taken; most of these operations happen when the backup is prepared. This means that some encryption-related problems (such as using the wrong encryption keys) may not become apparent until the backup is prepared.

    Using mariadb-backup for Galera SSTs

    The mariadb-backup SST method uses the mariadb-backup utility to perform Galera State Snapshot Transfers (SSTs).

    Files Backed up by mariadb-backup

    mariadb-backup backs up many different files in order to perform its backup operation. See Files Backed up by mariadb-backup for a list of these files.

    Files Created by mariadb-backup

    mariadb-backup creates several different types of files during the backup and prepare phases. See Files Created by mariadb-backup for a list of these files.

    Binary Logs

    mariadb-backup can store the binary log position in the backup; see the --binlog-info option. This can be used for point-in-time recovery and to set up a replica with the correct binlog position.

    Known Issues

    No Default Data Directory

    mariadb-backup does not automatically pick up the server's default datadir value, so the data directory may need to be specified explicitly. See MDEV-12956 for more information.

    Too Many Open Files

    If mariadb-backup uses more file descriptors than the system is configured to allow, then operations can fail with "Too many open files" errors (operating system error 24).

    mariadb-backup throws an error and aborts when this error is encountered. See MDEV-19060 for more information.

    When this error is encountered, one solution is to explicitly specify a value for the --open-files-limit option, either on the command line or in one of the supported server option groups in an option file. For example:
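A sketch of the command-line form (other options elided; the limit value is illustrative):

```shell
mariadb-backup --backup \
   --open-files-limit=65535 \
   --target-dir=/var/mariadb/backup/ \
   --user=mariadb-backup --password=mypassword
```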

    An alternative solution is to set the soft and hard limits for the user account that runs mariadb-backup by adding new limits to /etc/security/limits.conf. For example, if mariadb-backup is run by the mysql user, then you could add lines like the following:
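For example (the limit value is illustrative; the four-field limits.conf format is standard):

```
mysql soft nofile 65535
mysql hard nofile 65535
```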

    After the system is rebooted, the above configuration should set new open file limits for the mysql user, which can be confirmed by checking that user's ulimit output.

    See Also

    • How to backup with MariaDB (video)

    • MariaDB point-in-time recovery (video)

    • mariadb-backup and Restic (video)

    This page is licensed: CC BY-SA / Gnu FDL

    MariaDB Enterprise Backup

    This page details MariaDB Enterprise Backup, an enhanced version of mariadb-backup with enterprise-specific optimizations and support.

    Overview

    Regular and reliable backups are essential to successful recovery of mission-critical applications. Backup and restore operations are performed using MariaDB Enterprise Backup, an enterprise build of mariadb-backup.

    MariaDB Enterprise Backup is compatible with MariaDB Enterprise Server.

    MariaDB Enterprise Backup provides:

  • Backup/restore of tables using the MyRocks storage engine. See Files Backed up by mariadb-backup: MyRocks Data Files for more information.

  • Optimal backup support for all storage engines that store data on local disk.

    MariaDB Backup does not support some additional features. Related topics: Setting up a Replica with mariadb-backup, and Using Encryption and Compression Tools With mariadb-backup.

    [mariadb]

    Options read by MariaDB Server.

    [mariadb-X.Y]

    Options read by a specific version of MariaDB Server. For example: [mariadb-10.6].

    [mariadbd]

    Options read by MariaDB Server. Available from MariaDB 10.5.4.

    [mariadbd-X.Y]

    Options read by a specific version of MariaDB Server. For example: [mariadbd-10.6]. Available from MariaDB 10.5.4.

    [client-server]

    Options read by all MariaDB client programs and the MariaDB Server. This is useful for options like socket and port, which are common to the server and the clients.

    [galera]

    Options read by MariaDB Server, but only if it is compiled with Galera Cluster support. All builds on Linux are compiled with Galera Cluster support. When using one of these builds, options from this option group are read even if the Galera Cluster functionality is not enabled.

    --print-defaults

    Print the program argument list and exit.

    --no-defaults

    Don't read default options from any option file.

    --defaults-file=#

    Only read default options from the given option file.

    --defaults-extra-file=#

    Read this file after the global files are read.

    --defaults-group-suffix=#

    In addition to the default option groups, also read option groups with this suffix.

    [mariadb-backup]

    Options read by mariadb-backup.


    [xtrabackup]

    Options read by mariadb-backup and Percona XtraBackup.

    [server]

    Options read by MariaDB Server.

    [mysqld]

    Options read by mysqld, which includes both MariaDB Server and MySQL Server.

    [mysqld-X.Y]

    Options read by a specific version of mysqld, which includes both MariaDB Server and MySQL Server. For example: [mysqld-10.6].

    [mariadb-backup]

    Options read by mariadb-backup.

    [xtrabackup]

    Options read by mariadb-backup and Percona XtraBackup.

    [client]

    Options read by all MariaDB and MySQL client programs, which includes both MariaDB and MySQL clients. For example, mysqldump.

    [client-server]

    Options read by all MariaDB client programs and the MariaDB Server. This is useful for options like socket and port, which are common to the server and the clients.

    [client-mariadb]

    Options read by all MariaDB client programs.


    Non-blocking Backups

  • Understanding Recovery

  • Creating the Backup User

  • Full Backup and Restore

  • Incremental Backup and Restore

  • Partial Backup and Restore

  • Point-in-Time Recoveries

  • Storage Engines and Backup Types

    MariaDB Backup creates a file-level backup of data from the MariaDB Community Server data directory. This backup includes temporal data, and the encrypted and unencrypted tablespaces of supported storage engines (e.g., InnoDB, MyRocks, Aria).

    MariaDB Enterprise Server implements:

    • Full backups, which contain all data in the database.

    • Incremental backups, which contain modifications since the last backup.

    • Partial backups, which contain a subset of the tables in the database.

    Backup support is specific to storage engines. All supported storage engines enable full backup. The InnoDB storage engine additionally supports incremental backup.

    Note: MariaDB Enterprise Backup does not support backups of MariaDB ColumnStore. Backup of MariaDB ColumnStore can be performed using mariadb-dump/mysqldump. Backup of data ingested to MariaDB ColumnStore can also occur pre-ingestion, such as in an HTAP deployment where the transactional data is backed up in MariaDB Enterprise Server, and data is restored to MariaDB ColumnStore by reprocessing it.

    Nonblocking Backups

    A feature of MariaDB Enterprise Backup and MariaDB Enterprise Server, non-blocking backups minimize workload impact during backups. When MariaDB Enterprise Backup connects to MariaDB Enterprise Server, staging operations are initiated to protect data during read.

    Non-blocking backup functionality differs from historical backup functionality in the following ways:

    • MariaDB Enterprise Backup in MariaDB Enterprise Server includes enterprise-only optimizations to backup staging, including DDL statement tracking, which reduces lock-time during backups.

    • MariaDB Backup in MariaDB Community Server 10.4 and later blocks writes only to log tables and statistics.

    • Older MariaDB Community Server releases used FLUSH TABLES WITH READ LOCK, which closed open tables and only allowed them to be reopened with a read lock for the duration of the backup.

    Understanding Recovery

    MariaDB Enterprise Backup creates complete or incremental backups of MariaDB Enterprise Server data, and is also used to restore data from backups produced using MariaDB Enterprise Backup.

    Preparing Backups for Recovery

    Full backups produced using MariaDB Enterprise Backup are not initially point-in-time consistent, and an attempt to restore from a raw full backup will cause InnoDB to crash to protect the data.

    Incremental backups produced using MariaDB Enterprise Backup contain only the changes since the last backup and cannot be used standalone to perform a restore.

    To restore from a backup, you first need to prepare the backup for point-in-time consistency using the --prepare command:

    • Running the --prepare command on a full backup synchronizes the tablespaces, ensuring that they are point-in-time consistent and ready for use in recovery.

    • Running the --prepare command on an incremental backup synchronizes the tablespaces and also applies the updated data into the previous full backup, making it a complete backup ready for use in recovery.

    • Running the --prepare command on data that is to be used for a partial restore (when restoring only one or more selected tables) requires that you also use the --export option to create the necessary .cfg files to use in recovery.
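As a sketch, the three prepare variants described above might look like this (paths are illustrative):

```shell
# Full backup
mariadb-backup --prepare --target-dir=/data/backups/full

# Incremental backup applied onto a prepared full backup
mariadb-backup --prepare --target-dir=/data/backups/full \
   --incremental-dir=/data/backups/inc1

# Backup intended for a partial restore
mariadb-backup --prepare --export --target-dir=/data/backups/full
```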

    Restore Requires Empty Data Directory

    When MariaDB Enterprise Backup restores from a backup, it copies or moves the backup files into the MariaDB Enterprise Server data directory, as defined by the datadir system variable.

    For MariaDB Backup to safely restore data from full and incremental backups, the data directory must be empty. One way to achieve this is to move the data directory aside to a unique directory name:

    1. Make sure that the Server is stopped.

    2. Move the data directory to a unique name (e.g., /var/lib/mysql-2020-01-01) OR remove the old data directory (depending on how much space you have available).

    3. Create a new (empty) data directory (e.g., mkdir /var/lib/mysql).

    4. Run MariaDB Backup to restore the databases into that directory.

    5. Change the ownership of all the restored files to the correct system user (e.g., chown -R mysql:mysql /var/lib/mysql).

    6. Start MariaDB Enterprise Server, which now uses the restored data directory.

    7. When ready, and if you have not already done so, delete the old data directory to free disk space.
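The steps above can be sketched as shell commands (paths, service name, and user/group are illustrative):

```shell
systemctl stop mariadb                                       # 1. stop the server
mv /var/lib/mysql /var/lib/mysql-2020-01-01                  # 2. move the old datadir aside
mkdir /var/lib/mysql                                         # 3. create an empty datadir
mariadb-backup --copy-back --target-dir=/data/backups/full   # 4. restore the backup
chown -R mysql:mysql /var/lib/mysql                          # 5. fix file ownership
systemctl start mariadb                                      # 6. start the server
```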

    Creating the Backup User

    When MariaDB Backup performs a backup operation, it not only copies files from the data directory but also connects to the running MariaDB Enterprise Server.

    This connection to MariaDB Enterprise Server is used to manage locks that prevent the Server from writing to a file while being read for a backup.

    MariaDB Backup establishes this connection based on the user credentials specified with the --user and --password options when performing a backup.

    It is recommended that a dedicated user be created and authorized to perform backups.

    MariaDB Backup requires this user to have the RELOAD, PROCESS, LOCK TABLES, and REPLICATION CLIENT privileges.
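For example (the account name, host, and password are placeholders):

```sql
CREATE USER 'mariadb-backup'@'localhost' IDENTIFIED BY 'mbu_passwd';
GRANT RELOAD, PROCESS, LOCK TABLES, REPLICATION CLIENT
   ON *.* TO 'mariadb-backup'@'localhost';
```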

    In the above example, MariaDB Backup would run on the local system that runs MariaDB Enterprise Server. Where backups may be run against a remote server, the user authentication and authorization should be adjusted.

    While MariaDB Backup requires a user for backup operations, no user is required for restore operations since restores occur while MariaDB Enterprise Server is not running.


    Full Backup and Restore

    Full backups performed with MariaDB Backup contain all table data present in the database.

    When performing a full backup, MariaDB Backup makes a file-level copy of the MariaDB Enterprise Server data directory. This backup omits log data such as the binary logs (binlog), error logs, general query logs, and slow query logs.

    Performing Full Backups

    When you perform a full backup, MariaDB Backup writes the backup to the --target-dir path. The directory must be empty or non-existent and the operating system user account must have permission to write to that directory. A database user account is required to perform the backup.

    The version of mariadb-backup should match the MariaDB Enterprise Server version. When the versions do not match, errors can sometimes occur, or the backup can sometimes be unusable.

    To create a backup, execute mariadb-backup with the --backup option, and provide the database user account credentials using the --user and --password options:
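A sketch of such a command (path and credentials are illustrative):

```shell
mariadb-backup --backup \
   --target-dir=/data/backups/full \
   --user=mariadb-backup --password=mbu_passwd
```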

    After the command completes, the backup is available in the designated --target-dir path.

    Preparing a Full Backup for Recovery

    A raw full backup is not point-in-time consistent and must be prepared before it can be used for a restore. The backup can be prepared any time after the backup is created and before the backup is restored. However, MariaDB recommends preparing a backup immediately after taking the backup to ensure that the backup is consistent.

    The backup should be prepared with the same version of MariaDB Backup that was used to create the backup.

    To prepare the backup, execute mariadb-backup with the --prepare option:
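For example (the path is illustrative):

```shell
mariadb-backup --prepare --target-dir=/data/backups/full
```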

    For best performance, the --use-memory option should be set to the server's innodb_buffer_pool_size value.

    Restoring from Full Backups

    Once a full backup has been prepared to be point-in-time consistent, MariaDB Backup is used to copy backup data to the MariaDB Enterprise Server data directory.

    To restore from a full backup:

    1. Stop the MariaDB Enterprise Server

    2. Empty the data directory

    3. Restore from the "full" directory using the --copy-back option:
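For example (the path is illustrative):

```shell
mariadb-backup --copy-back --target-dir=/data/backups/full
```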

    MariaDB Backup writes to the data directory as the current user, which can be changed using sudo. To confirm that restored files are properly owned by the user that runs MariaDB Enterprise Server, run a command like this (adapted for the correct user/group):
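For example (user, group, and path are illustrative):

```shell
chown -R mysql:mysql /var/lib/mysql
```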

    Once this is done, start MariaDB Enterprise Server:
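For example, on a systemd-based system:

```shell
systemctl start mariadb
```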

    When the Server starts, it works from the restored data directory.

    Incremental Backup and Restore

    Full backups of large data-sets can be time-consuming and resource-intensive. MariaDB Backup supports the use of incremental backups to minimize this impact.

    While full backups are resource-intensive at time of backup, the resource burden around incremental backups occurs when preparing for restore. First, the full backup is prepared for restore, then each incremental backup is applied.

    Performing Incremental Backups

    When you perform an incremental backup, MariaDB Backup compares a previous full or incremental backup to what it finds on MariaDB Community Server. It then creates a new backup containing the incremental changes.

    Incremental backup is supported for InnoDB tables. Tables using other storage engines receive full backups even during incremental backup operations.

    To increment a full backup, use the --incremental-basedir option to indicate the path to the full backup and the --target-dir option to indicate where you want to write the incremental backup:
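A sketch of such a command (paths and credentials are illustrative):

```shell
mariadb-backup --backup \
   --incremental-basedir=/data/backups/full \
   --target-dir=/data/backups/inc1 \
   --user=mariadb-backup --password=mbu_passwd
```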

    In this example, MariaDB Backup reads the /data/backups/full directory and creates an incremental backup in the /data/backups/inc1 directory.

    Preparing an Incremental Backup

    An incremental backup must be applied to a prepared full backup before it can be used in a restore operation. If you have multiple full backups to choose from, pick the nearest full backup prior to the incremental backup that you want to restore. You may also want to back up your full-backup directory first, as it is modified by the updates in the incremental data.

    If your full backup directory is not yet prepared, run this to make it consistent:
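For example (the path is illustrative):

```shell
mariadb-backup --prepare --target-dir=/data/backups/full
```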

    Then, using the prepared full backup, apply the first incremental backup's data to the full backup in an incremental preparation step:
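For example (paths are illustrative):

```shell
mariadb-backup --prepare \
   --target-dir=/data/backups/full \
   --incremental-dir=/data/backups/inc1
```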

    Once the incremental backup has been applied to the full backup, the full backup directory contains the changes from the incremental backup (that is, the inc1/ directory). Feel free to remove inc1/ to save disk space.

    Restoring from Incremental Backups

    Once you have prepared the full backup directory with all the incremental changes you need (as described above), stop the MariaDB Community Server, empty its data directory, and restore from the original full backup directory using the --copy-back option:
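For example (the path is illustrative):

```shell
mariadb-backup --copy-back --target-dir=/data/backups/full
```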

    MariaDB Backup writes files into the data directory using either the current user or root (in the case of a sudo operation), which may be different from the system user that runs the database. Run the following to recursively update the ownership of the restored files and directories:
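For example (user, group, and path are illustrative):

```shell
chown -R mysql:mysql /var/lib/mysql
```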

    Then, start MariaDB Enterprise Server. When the Server starts, it works from the restored data directory.

    Partial Backup and Restore

    In a partial backup, MariaDB Backup copies a specified subset of tablespaces from the MariaDB Enterprise Server data directory. Partial backups are useful in establishing a higher frequency of backups on specific data, at the expense of increased recovery complexity. In selecting tablespaces for a partial backup, please consider referential integrity.

    Performing a Partial Backup

    Command-line options can be used to narrow the set of databases or tables to be included within a backup:

    Option
    Description

    --databases

    List of databases to include

    --databases-exclude

    List of databases to omit from the backup

    --databases-file

    Path to file listing the databases to include

    --tables

    List of tables to include

    --tables-exclude

    List of tables to exclude

    --tables-file

    Path to file listing the tables to include

    For example, you may wish to produce a partial backup, which excludes a specific database:
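A sketch of such a command (database name, path, and credentials are illustrative):

```shell
mariadb-backup --backup \
   --databases-exclude=unwanted_db \
   --target-dir=/data/backups/partial \
   --user=mariadb-backup --password=mbu_passwd
```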

    Partial backups can also be incremental:
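For example (names and paths are illustrative):

```shell
mariadb-backup --backup \
   --databases-exclude=unwanted_db \
   --incremental-basedir=/data/backups/partial \
   --target-dir=/data/backups/partial-inc1 \
   --user=mariadb-backup --password=mbu_passwd
```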

    Preparing a Backup Before a Partial Restore

    As with full and incremental backups, partial backups are not point-in-time consistent. A partial backup must be prepared before it can be used for recovery.

    A partial restore can be performed from a full backup or partial backup.

    The preparation step for either partial or full backup restoration requires the use of transportable tablespaces for InnoDB. As such, each prepare operation requires the --export option:
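For example (the path is illustrative):

```shell
mariadb-backup --prepare --export --target-dir=/data/backups/partial
```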

    When using a partial incremental backup for restore, the incremental data must be applied to its prior partial backup before the data is complete. If performing partial incremental backups, run the prepare command again to apply the incremental changes onto the partial backup that served as the base.

    Performing a Partial Restore

    Unlike full and incremental backups, you cannot restore partial backups directly using MariaDB Backup. Further, as a partial backup does not contain a complete data directory, you cannot restore MariaDB Community Server to a startable state solely with a partial backup.

    To restore from a partial backup, you need to prepare a table on the MariaDB Community Server, then manually copy the files into the data directory.

    The details of the restore procedure depend on the characteristics of the table:

    • Partial Restore Non-partitioned Tables

    • Partial Restore Partitioned Tables

    • Partial Restore of Tables with Full-Text Indexes

    As partial restores are performed while the server is running, not stopped, care should be taken to prevent production workloads during restore activity.

    Note: You can also use data from a full backup in a partial restore operation if you have prepared the data using the --export option as described above.

    Partial Restore Non-partitioned Tables

    To restore a non-partitioned table from a backup, first create a new table on MariaDB Community Server to receive the restored data. It should match the specifications of the table you're restoring.

    Be extra careful if the backup data is from a server with a different version than the restore server, as some differences (such as a differing ROW_FORMAT) can cause an unexpected result.

    1. Create an empty table for the data being restored:

    2. Modify the table to discard the tablespace:

    3. Copy (or move) the files for the table from the backup to the data directory:

    4. Use a wildcard to include both the .ibd and .cfg files. Then, change the owner to the system user running MariaDB Community Server:

    5. Lastly, import the new tablespace:

    MariaDB Community Server looks in the data directory for the tablespace you copied in, then imports it for use. If the table is encrypted, it also looks for the encryption key with the relevant key ID that the table data specifies.

    6. Repeat these steps for every table you wish to restore.
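Putting the steps together for a hypothetical table db1.t1 (the table definition, paths, and user/group are all illustrative):

```shell
# 1.-2. Create the receiving table and discard its tablespace.
mariadb -e "CREATE TABLE db1.t1 (id INT PRIMARY KEY, val VARCHAR(100)) ENGINE=InnoDB;
            ALTER TABLE db1.t1 DISCARD TABLESPACE;"
# 3.-4. Copy the .ibd and .cfg files from the prepared backup, then fix ownership.
cp /data/backups/full/db1/t1.{ibd,cfg} /var/lib/mysql/db1/
chown mysql:mysql /var/lib/mysql/db1/t1.*
# 5. Import the tablespace.
mariadb -e "ALTER TABLE db1.t1 IMPORT TABLESPACE;"
```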

    Partial Restore Partitioned Tables

    Restoring a partitioned table from a backup requires a few extra steps compared to restoring a non-partitioned table.

    To restore a partitioned table from a backup, first create a new table on MariaDB Community Server to receive the restored data. It should match the specifications of the table you're restoring, including the partition specification.

    Be extra careful if the backup data is from a server with a different version than the restore server, as some differences (such as a differing ROW_FORMAT) can cause an unexpected result.

    1. Create an empty table for the data being restored:

    2. Then create a second empty table matching the column specification, but without partitions. This is your working table:

    3. For each partition you want to restore, discard the working table's tablespace:

    4. Then, copy the table files from the backup, using the new name:

    5. Change the owner to that of the user running MariaDB Community Server:

    6. Import the copied tablespace:

    7. Lastly, exchange the partition, copying the tablespace from the working table into the partition file for the target table:

    8. Repeat the above process for each partition until you have them all exchanged into the target table. Then delete the working table, as it's no longer necessary.

    This restores a partitioned table.
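The sequence can be sketched for a hypothetical partitioned table db1.t1 with partitions p0 and p1 (all names, definitions, and paths are illustrative; partition data files use the `t1#P#p0.ibd` naming convention):

```shell
# 1.-2. Create the target (partitioned) table and a working table without partitions.
mariadb -e "CREATE TABLE db1.t1 (id INT PRIMARY KEY) ENGINE=InnoDB
            PARTITION BY RANGE (id)
            (PARTITION p0 VALUES LESS THAN (100),
             PARTITION p1 VALUES LESS THAN (MAXVALUE));
            CREATE TABLE db1.t1_work (id INT PRIMARY KEY) ENGINE=InnoDB;"

# For each partition (p0 shown):
mariadb -e "ALTER TABLE db1.t1_work DISCARD TABLESPACE;"       # 3. discard
cp '/data/backups/full/db1/t1#P#p0.ibd' /var/lib/mysql/db1/t1_work.ibd   # 4. copy
cp '/data/backups/full/db1/t1#P#p0.cfg' /var/lib/mysql/db1/t1_work.cfg
chown mysql:mysql /var/lib/mysql/db1/t1_work.*                 # 5. ownership
mariadb -e "ALTER TABLE db1.t1_work IMPORT TABLESPACE;"        # 6. import
mariadb -e "ALTER TABLE db1.t1
            EXCHANGE PARTITION p0 WITH TABLE db1.t1_work;"     # 7. exchange

# 8. After all partitions are exchanged, drop the working table.
mariadb -e "DROP TABLE db1.t1_work;"
```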

    Partial Restore of Tables with Full-Text Indexes

    When restoring a table with a full-text search (FTS) index, InnoDB may throw a schema mismatch error.

    In this case, to restore the table, it is recommended to:

    • Remove the corresponding .cfg file.

    • Restore data to a table without any secondary indexes including FTS.

    • Add the necessary secondary indexes to the restored table.

    For example, to restore table t1 with FTS index from database db1:

    1. In the MariaDB shell, drop the table you are going to restore:

    2. Create an empty table for the data being restored:

    3. Modify the table to discard the tablespace:

    4. In the operating system shell, copy the table files from the backup to the data directory of the corresponding database:

    5. Remove the .cfg file from the data directory:

    6. Change the owner of the newly copied files to the system user running MariaDB Community Server:

    7. In the MariaDB shell, import the copied tablespace:

    8. Verify that the data has been successfully restored:

    9. Add the necessary secondary indexes:

    10. The table is now fully restored.
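These steps can be sketched for a hypothetical table db1.t1 with a full-text index on a txt column (definitions, index names, and paths are illustrative):

```shell
mariadb -e "DROP TABLE IF EXISTS db1.t1;"                       # 1. drop old table
mariadb -e "CREATE TABLE db1.t1 (id INT PRIMARY KEY, txt TEXT)
            ENGINE=InnoDB;"                                     # 2. no secondary/FTS indexes yet
mariadb -e "ALTER TABLE db1.t1 DISCARD TABLESPACE;"             # 3. discard tablespace
cp /data/backups/full/db1/t1.ibd /var/lib/mysql/db1/            # 4. copy the data file
rm -f /var/lib/mysql/db1/t1.cfg                                 # 5. ensure no .cfg file remains
chown mysql:mysql /var/lib/mysql/db1/t1.ibd                     # 6. fix ownership
mariadb -e "ALTER TABLE db1.t1 IMPORT TABLESPACE;"              # 7. import
mariadb -e "SELECT COUNT(*) FROM db1.t1;"                       # 8. verify the data
mariadb -e "ALTER TABLE db1.t1
            ADD FULLTEXT INDEX ft_txt (txt);"                   # 9. re-add the FTS index
```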

    Point-in-Time Recoveries

    Recovering from a backup restores the data directory at a specific point-in-time, but it does not restore the binary log. In a point-in-time recovery, you begin by restoring the data directory from a full or incremental backup, then use the mysqlbinlog utility to recover the binary log data to a specific point in time.

    1. First, prepare the backup as you normally would for a full or incremental backup:

    2. When MariaDB Backup runs on a MariaDB Community Server where binary logging is enabled, it stores binary log information in the xtrabackup_binlog_info file. Consult this file to find the binary log file name and position to use. In the following example, the log position is 321.

    3. Update the configuration file to use a new data directory.

    4. Using MariaDB Backup, restore from the backup to the new data directory:

    5. Then change the owner to the MariaDB Community Server system user:

    6. Start MariaDB Community Server:

    7. Use the binary log file in the old data directory, the start position from the xtrabackup_binlog_info file, the date and time you want to restore to, and the mysqlbinlog utility to create an SQL file with the binary log changes:

    8. Lastly, run the binary log SQL to restore the databases:
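The binary log replay (steps 7 and 8) can be sketched as follows; the binlog file name, position, paths, and stop time are all illustrative and must be taken from your own xtrabackup_binlog_info file:

```shell
# Extract changes from position 321 up to the chosen point in time.
mysqlbinlog --start-position=321 \
    --stop-datetime="2025-01-01 12:00:00" \
    /var/lib/mysql-old/mariadb-bin.000096 > binlog_changes.sql

# Apply the extracted changes to the restored server.
mariadb < binlog_changes.sql
```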

    This page is: Copyright © 2025 MariaDB. All rights reserved.

    MariaDB Enterprise Server
    MariaDB Backup
    Storage Engines and Backup Types

    Aria System Variables

    A comprehensive list of system variables for configuring Aria, including buffer sizes, log settings, and recovery options.

    This page documents system variables related to the Aria storage engine. For options that are not system variables, see Aria Options.

    See Server System Variables for instructions on setting system variables.
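For instance, a dynamic Aria variable can be changed at runtime with SET GLOBAL, while non-dynamic ones (such as aria_block_size) must be set at startup; the value shown is illustrative:

```sql
SET GLOBAL aria_checkpoint_interval = 30;   -- dynamic: takes effect immediately
SHOW GLOBAL VARIABLES LIKE 'aria%';         -- inspect current Aria settings
```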

    aria_block_size

    • Description: Block size to be used for Aria index pages. Changing this requires dumping your Aria tables, deleting the old tables and all Aria log files, and then restoring the tables. If key lookups take too long (by default roughly 8192/2 bytes must be scanned to find each key), the block size can be made smaller, e.g. 4096.

    • Command line: --aria-block-size=#

    • Scope: Global

    • Dynamic: No

    • Data Type: numeric

    • Default Value: 8192

    • Range:

      • Newer releases: 4096 to 32768 in increments of 1024

      • Older releases: 1024 to 32768

    aria_checkpoint_interval

    • Description: Interval in seconds between automatic checkpoints. 0 means 'no automatic checkpoints' which makes sense only for testing.

    • Command line: --aria-checkpoint-interval=#

    • Scope: Global

    • Dynamic: Yes

    aria_checkpoint_log_activity

    • Description: Number of bytes that the transaction log has to grow between checkpoints before a new checkpoint is written to the log.

    • Command line: aria-checkpoint-log-activity=#

    • Scope: Global

    • Dynamic: Yes

    aria_encrypt_tables

    • Description: Enables automatic encryption of all user-created Aria tables that have the ROW_FORMAT table option set to PAGE. See Data-at-Rest Encryption for more information.

    • Command line: aria-encrypt-tables={0|1}

    • Scope: Global

    • Dynamic: Yes

    aria_force_start_after_recovery_failures

    • Description: Number of consecutive log recovery failures after which logs are automatically deleted to cure the problem; 0 (the default) disables the feature.

    • Command line: --aria-force-start-after-recovery-failures=#

    • Scope: Global

    • Dynamic: No

    aria_group_commit

    • Description: Specifies the Aria group commit mode.

    • Command line: --aria_group_commit="value"

    • Alias: maria_group_commit

    • Scope: Global

    aria_group_commit_interval

    • Description: Interval between commits in microseconds (1/1000000 second), for other threads to come and do a commit in "hard" mode and sync()/commit at all in "soft" mode. The option only has an effect if aria_group_commit is used.

    • Command line: --aria_group_commit_interval=#

    • Alias: maria_group_commit_interval

    aria_log_dir_path

    • Description: Path to the directory where the transactional log should be stored.

    • Command line: --aria-log-dir-path=value

    • Scope: Global

    • Dynamic: No

    aria_log_file_size

    • Description: Limit for Aria transaction log size

    • Command line: --aria-log-file-size=#

    • Scope: Global

    • Dynamic: Yes

    aria_log_purge_type

    • Description: Specifies how the Aria transactional logs are purged. Set to at_flush to keep a copy of the transaction logs (good as an extra backup). The logs will stay until the next FLUSH LOGS.

    • Command line: --aria-log-purge-type=name

    • Scope: Global

    aria_max_sort_file_size

    • Description: Don't use the fast sort index method to create an index if the temporary file would get bigger than this.

    • Command line: --aria-max-sort-file-size=#

    • Scope: Global

    • Dynamic: Yes

    aria_page_checksum

    • Description: Determines whether index and data should use page checksums for extra safety. Can be overridden per table with the PAGE_CHECKSUM clause in CREATE TABLE.

    • Command line: --aria-page-checksum=#

    • Scope: Global

    • Dynamic: Yes

    aria_pagecache_age_threshold

    • Description: The number of hits a hot block has to remain untouched before it is considered aged enough to be downgraded to a warm block. The value specifies the percentage ratio of that number of hits to the total number of blocks in the page cache.

    • Command line: --aria-pagecache-age-threshold=#

    • Scope: Global

    • Dynamic: Yes

    aria_pagecache_buffer_size

    • Description: The size of the buffer used for index and data blocks for Aria tables. This can include explicit Aria tables, system tables, and temporary tables. Increase this to get better handling, and measure by comparing Aria_pagecache_reads (should be small) with Aria_pagecache_read_requests.

    • Command line: --aria-pagecache-buffer-size=#

    • Scope: Global

    • Dynamic: No
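    As a sketch, the page cache efficiency described above can be checked from the server's status counters (the status variable names come from the Aria status variables page):

    ```sql
    -- A low Aria_pagecache_reads relative to Aria_pagecache_read_requests
    -- indicates the page cache buffer is large enough.
    SHOW GLOBAL STATUS LIKE 'Aria_pagecache_read%';
    ```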

    aria_pagecache_division_limit

    • Description: The minimum percentage of warm blocks in the key cache.

    • Command line: --aria-pagecache-division-limit=#

    • Scope: Global

    • Dynamic: Yes

    aria_pagecache_file_hash_size

    • Description: Number of hash buckets for open and changed files. If you have many Aria files open you should increase this for faster flushing of changes. A good value is probably 1/10th of the number of possible open Aria files.

    • Command line: --aria-pagecache-file-hash-size=#

    • Scope: Global

    • Dynamic: No

    aria_pagecache_segments

    • Description: The number of segments in the page cache. Each file is put in its own segment of size pagecache_buffer_size / segments. Having many segments improves parallel performance.

    • Command line: --aria-pagecache-segments=#

    • Scope: Global

    • Dynamic: No

    aria_recover

    • Description: aria_recover has been renamed to aria_recover_options. See aria_recover_options for the description.

    aria_recover_options

    • Description: Specifies how corrupted tables should be automatically repaired. More than one option can be specified, for example FORCE,BACKUP.

      • NORMAL: Normal automatic repair, formerly the default

      • OFF: Autorecovery is disabled, the equivalent of not using the option

    aria_repair_threads

    • Description: Number of threads to use when repairing Aria tables. The value of 1 disables parallel repair. Increasing from the default will usually result in faster repair, but will use more CPU and memory.

    • Command line: --aria-repair-threads=#

    • Scope: Global, Session

    • Dynamic: Yes

    aria_sort_buffer_size

    • Description: The buffer that is allocated when sorting the index when doing a REPAIR, or when creating indexes with CREATE INDEX or ALTER TABLE.

    • Command line: --aria-sort-buffer-size=#

    • Scope: Global, Session

    • Dynamic: Yes

    aria_stats_method

    • Description: Determines how NULLs are treated for Aria index statistics purposes. If set to nulls_equal, all NULL index values are treated as a single group. This is usually fine, but if you have large numbers of NULLs the average group size is slanted higher, and the optimizer may miss using the index for ref accesses when it would be useful. If set to nulls_unequal, the default, the opposite approach is taken, with each NULL forming its own group of one. Conversely, the average group size is slanted lower, and the optimizer may use the index for ref accesses when it is not suitable. Setting to nulls_ignored ignores NULLs altogether in index group calculations. Statistics need to be recalculated after this method is changed. See also Index Statistics, myisam_stats_method and innodb_stats_method.

    • Command line: --aria-stats-method=#
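    As a sketch, changing the method and refreshing statistics might look like this (assuming the variable is dynamic in your version; the table name is illustrative):

    ```sql
    -- Switch NULL handling for Aria index statistics
    SET GLOBAL aria_stats_method = 'nulls_ignored';

    -- Statistics need to be recalculated after the change
    ANALYZE TABLE my_aria_table;
    ```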

    aria_sync_log_dir

    • Description: Controls syncing directory after log file growth and new file creation.

    • Command line: --aria-sync-log-dir=#

    • Scope: Global

    • Dynamic: Yes

    aria_used_for_temp_tables

    • Description: Read-only variable indicating whether the Aria storage engine is used for internal on-disk temporary tables. If set to ON, the default, the Aria storage engine is used. If set to OFF, MariaDB reverts to using MyISAM for on-disk temporary tables. The MEMORY storage engine is used for temporary tables where appropriate, regardless of this variable's setting. The default can be changed by not using the --with-aria-tmp-tables option when building MariaDB.

    • Command line: No

    • Scope: Global

    deadlock_search_depth_long

    • Description: Long search depth for the two-step deadlock detection. Only used by the Aria storage engine.

    • Command line: --deadlock-search-depth-long=#

    • Scope: Global, Session

    • Dynamic: Yes

    deadlock_search_depth_short

    • Description: Short search depth for the two-step deadlock detection. Only used by the Aria storage engine.

    • Command line: --deadlock-search-depth-short=#

    • Scope: Global, Session

    • Dynamic: Yes

    deadlock_timeout_long

    • Description: Long timeout in microseconds for the two-step deadlock detection. Only used by the Aria storage engine.

    • Command line: --deadlock-timeout-long=#

    • Scope: Global, Session

    • Dynamic: Yes

    deadlock_timeout_short

    • Description: Short timeout in microseconds for the two-step deadlock detection. Only used by the Aria storage engine.

    • Command line: --deadlock-timeout-short=#

    • Scope: Global, Session

    • Dynamic: Yes

    This page is licensed: CC BY-SA / Gnu FDL

    Aria FAQ

    Frequently asked questions about the Aria storage engine, covering its history, comparison with MyISAM, and key features like crash safety.

    This FAQ provides information on the Aria storage engine.

    The Aria storage engine was previously known as Maria (see The Aria Name). In current releases of MariaDB, you can refer to the engine as either Maria or Aria. As this will change in future releases, please update references in your scripts and automation to use the name Aria.

    What is Aria?

    Aria is a storage engine for MySQL® and MariaDB. It was originally developed with the goal of becoming the default transactional and non-transactional storage engine for MariaDB and MySQL.

    It has been in development since 2007 and was first announced on Monty's blog. The same core MySQL engineers who developed the MySQL server and the MyISAM, MERGE, and MEMORY storage engines are also working on Aria.

    Why is the engine called Aria?

    Originally, the storage engine was called Maria, after Monty's younger daughter. Monty named MySQL after his first child, My, and his second child, Max, gave his name to MaxDB and the MySQL-Max distributions.

    In practice, having both MariaDB the database server and Maria the storage engine with such similar names proved confusing. To mitigate this, the decision was made to change the name. A "Rename Maria" contest was held during the first half of 2010 and names were submitted from around the world. Monty picked the name Aria from a short list of finalists. Chris Tooley, who suggested it, received the prize of a Linux-powered System 76 Meerkat NetTop from Monty Program.

    For more information, see The Aria Name.

    What's the goal for the current version?

    The current version of Aria is 1.5. The goal of this release is to develop a crash-safe alternative to MyISAM. That is, when MariaDB restarts after a crash, Aria recovers all tables to the state as of the start of a statement or at the start of the last LOCK TABLES statement.

    The current goal is to keep the code stable and fix all bugs.

    What's the goal for the next version?

    The next version of Aria is 2.0. The goal for this release is to develop a fully transactional storage engine with at least all the major features of InnoDB.

    Currently, Aria 2.0 is on hold as its developers are focusing on improving MariaDB. However, they are interested in working with interested customers and partners to add more features to Aria and eventually release 2.0.

    These are some of the goals for Aria 2.0:

    • ACID compliant

    • Commit/Rollback

    • Concurrent updates/deletes

    • Row locking

    Beginning in Aria 2.5, the plan is to focus on improving performance.

    What is the ultimate goal of Aria?

    Long term, we have the following goals for Aria:

    • To create a new, ACID and Multi-Version Concurrency Control (MVCC), transactional storage engine that can function as both the default non-transactional and transactional storage engine for MariaDB and MySQL®.

    • To be a MyISAM replacement. This is possible because Aria can also be run in non-transactional mode, supports the same row formats as MyISAM, and supports or will support all major features of MyISAM.

    • To be the default non-transactional engine in MariaDB (instead of MyISAM).

    What are the design goals in Aria?

    • Multi-Version Concurrency Control (MVCC) and ACID storage engine.

    • Optionally non-transactional tables that should be 'as fast and as compact' as MyISAM tables.

    • Be able to use Aria for internal temporary tables in MariaDB (instead of MyISAM).

    • All indexes should have equal speed (clustered index is not on our current road map for Aria. If you need clustered index, you should use XtraDB).

    Where can I find documentation and help about Aria?

    Documentation is available in the Aria section and related topics. The project is maintained on GitHub.

    If you want to know what is happening or want to be part of developing Aria, you can subscribe to the developers, docs, or discuss mailing lists.

    To report and check bugs in Aria, see jira.mariadb.org.

    You can usually find some of the Aria developers on our Zulip instance at mariadb.zulipchat.com or on the IRC channel #maria.

    Who develops Aria?

    The Core Team who develop Aria are:

    Technical lead

    • Michael "Monty" Widenius - Creator of MySQL and MyISAM

    Core Developers (in alphabetical order)

    • Guilhem Bichot - Replication expert, online backup for MyISAM, etc.

    • Kristian Nielsen - MySQL build tools, NDB, MySQL server

    • Oleksandr Byelkin - Query cache, sub-queries, views.

    • Sergei Golubchik - Server Architect, Full text search, keys for MyISAM-Merge, Plugin architecture, etc.

    All except Guilhem Bichot work for MariaDB Corporation Ab.

    What is the release policy/schedule of Aria?

    Aria follows the same release policy and schedule as MariaDB. Some clarifications, unique to the Aria storage engine:

    • Aria index and data file formats should be backwards and forwards compatible to ensure easy upgrades and downgrades.

    • The log file format should also be compatible, but we don't make any guarantees yet. In some cases when upgrading, you must remove the old aria_log.% and maria_log.% files before restarting MariaDB. (So far, this has only occurred in one upgrade.)

    Extended commitment for Beta 1.5

    • Aria is now feature complete according to specification.

    How does Aria 1.5 Compare to MyISAM?

    Aria 1.0 was basically a crash-safe, non-transactional version of MyISAM. Aria 1.5 added more concurrency (multiple concurrent inserters) and some optimizations.

    Aria supports all aspects of MyISAM, except as noted below. This includes external and internal check/repair/compressing of rows, different row formats, different index compress formats, etc. After a normal shutdown you can copy Aria files between servers.

    Advantages of Aria compared to MyISAM

    • Data and indexes are crash safe.

    • On a crash, changes are rolled back to the state at the start of a statement or of the last LOCK TABLES statement.

    • Aria can replay almost everything from the log, including CREATE, DROP, RENAME, and TRUNCATE of tables.

    Differences between Aria and MyISAM

    • Aria uses big (1GB by default) log files.

    • Aria has a log control file (aria_log_control) and log files (aria_log.%). The log files can be automatically purged when not needed or purged on demand (after backup).

    • Aria uses 8K pages by default (MyISAM uses 1K). This makes Aria a bit faster when using keys of fixed size, but slower when using variable-length packed keys (until we add a directory to index pages).

    Disadvantages of Aria compared to MyISAM

    • Aria doesn't support INSERT DELAYED.

    • Aria does not support multiple key caches.

    • Storage of very small rows (< 25 bytes) is not efficient for the PAGE format.

    • MERGE tables don't support Aria (should be very easy to add later).

    Differences between the MySQL-5.1-Maria release and the normal MySQL-5.1 release?

    See:

    Why do you use the TRANSACTIONAL keyword now when Aria is not yet transactional?

    In the current development phase, Aria tables created with TRANSACTIONAL=1 are crash-safe and atomic, but not transactional, because changes to Aria tables can't be rolled back with the ROLLBACK command. As we plan to make Aria tables fully transactional, we decided it was better to use the TRANSACTIONAL keyword from the start so that applications don't need to be changed later.
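    For instance, following the CREATE TABLE foo (...) TRANSACTIONAL=0|1 ENGINE=Aria syntax mentioned elsewhere in this FAQ, a crash-safe Aria table can be declared like this (table and column names are illustrative):

    ```sql
    -- Crash-safe and atomic today; full ROLLBACK support is planned
    CREATE TABLE event_log (
       id INT PRIMARY KEY AUTO_INCREMENT,
       msg VARCHAR(255)
    ) ENGINE=Aria TRANSACTIONAL=1;
    ```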

    What are the known problems with the MySQL-5.1-Maria release?

    • See KNOWN_BUGS.txt for open/design bugs.

    • See jira.mariadb.org for newly reported bugs. Please report anything you can't find here!

    • If there is a bug in the Aria recovery code or in the code that generates the logs, or if the logs become corrupted, then mysqld may fail to start because Aria can't execute the logs at start up.

    • The query cache and concurrent insert using the page row format have a bug; please disable the query cache while using the page row format.

    If Aria doesn't start or you have an unrecoverable table (shouldn't happen):

    • Remove the aria_log.% files from the data directory.

    • Restart mysqld and run CHECK TABLE, REPAIR TABLE, or mariadb-check on your Aria tables.

    Alternatively,

    • Remove the logs and run aria_chk on your *.MAI files.

    What is going to change in later Aria main releases?

    The LOCK TABLES statement will not start a crash-safe segment. You should use BEGIN and COMMIT instead.

    To make things future safe, you could do this:

    BEGIN;
    LOCK TABLES ....
    UNLOCK TABLES;
    COMMIT;

    And later you can just remove the LOCK TABLES and UNLOCK TABLES statements.

    How can I create a MyISAM-like (non-transactional) table in Aria?

    Example:

    CREATE TABLE t1 (a INT) ROW_FORMAT=FIXED TRANSACTIONAL=0 PAGE_CHECKSUM=0;

    Note that the rows are not cached in the page cache for FIXED or DYNAMIC format. If you want to have the data cached (something MyISAM doesn't support) you should use ROW_FORMAT=PAGE:
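    A sketch of such a table (the table name is illustrative):

    ```sql
    -- Non-transactional, but row data is cached in the Aria page cache
    CREATE TABLE t2 (a INT) ENGINE=Aria TRANSACTIONAL=0 ROW_FORMAT=PAGE;
    ```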

    You can use PAGE_CHECKSUM=1 also for non-transactional tables; this puts a page checksum on all index pages. It also puts a checksum on data pages if you use ROW_FORMAT=PAGE.

    You may still have a speed difference (it may be slightly positive or negative) between MyISAM and Aria because of different page sizes. You can change the page size for MariaDB with --aria-block-size=#, where # is 1024, 2048, 4096, 8192, 16384 or 32768.

    Note that if you change the page size you have to dump all your old tables into text (with mariadb-dump) and remove the old Aria log and Aria files:

    What are the advantages/disadvantages of the new PAGE format compared to the old MyISAM-like row formats (DYNAMIC and FIXED)

    The MyISAM-like DYNAMIC and FIXED formats are extremely simple and have very little space overhead, so it's hard to beat them when it comes to simple scanning of unmodified data. The DYNAMIC format does, however, get notably worse over time if you update a row a lot in a manner that increases its size.

    The advantages of the PAGE format (compared to DYNAMIC or FIXED) for non-transactional tables are:

    • It's cached by the page cache, which gives better random access performance (as it uses fewer system calls).

    • Faster updates (compared to DYNAMIC).

    • Does not fragment as easily as the DYNAMIC format during UPDATE statements. The maximum number of fragments are very low.

    • Code can easily be extended to only read the accessed columns (for example to skip reading blobs).

    The disadvantages are:

    • Slight storage overhead (should only be notable for very small row sizes)

    • Slower full table scan time.

    • When using row_format=PAGE (the default), Aria first writes the row, then the keys, at which point the check for duplicate keys happens. This makes the PAGE format slower than DYNAMIC (or MyISAM) if there are a lot of duplicate keys, because of the overhead of writing and removing the row. If this is a problem, you can use row_format=DYNAMIC to get the same behavior as MyISAM.

    What's the proper way to copy an Aria table from one place to another?

    An Aria table consists of 3 files:

    It's safe to copy all the Aria files to another directory or MariaDB instance if any of the following holds:

    • If you shut down the MariaDB Server properly with mariadb-admin shutdown, so that there is nothing for Aria to recover when it starts.

    or

    • If you have run a FLUSH TABLES statement and have not accessed the table using SQL from that time until the tables have been copied.

    In addition, you must adhere to the following rule for transactional tables:

    You can't copy the table to a location within the same MariaDB server if the new table has existed before and the new table is still active in the Aria recovery log (that is, Aria may need to access the old data during recovery). If you are unsure whether the old name existed, run on the table before you use it.

    After copying a transactional table, and before you use the table, we recommend that you run aria_chk --zerofill on it.

    This will overwrite all references to the logs (LSN), all transactional references (TRN) and all unused space with 0. It also marks the table as 'movable'. An additional benefit of zerofill is that the Aria files will compress better. No real data is ever removed as part of zerofill.

    Aria will automatically notice if you have copied a table from another system and will zerofill it on the first access if it was not marked as 'movable'. The reason for running aria_chk --zerofill manually is that you avoid a delay in the MariaDB server on the first access of the table.

    Note that this automatic detection doesn't work if you copy tables within the same MariaDB server!

    When is it safe to remove old log files?

    If you want to remove the Aria log files (aria_log.%) with rm or delete, you must first shut down MariaDB cleanly (for example, with mariadb-admin shutdown) before deleting the old files.

    The same rules apply when upgrading MariaDB: first take down MariaDB in a clean way, then upgrade. This allows you to remove the old log files if there are incompatibility problems between releases.

    Don't remove the aria_log_control file! This is not a log file, but a file that contains information about the Aria setup (current transaction id, unique id, next log file number etc.).

    If you do, Aria will generate a new aria_log_control file at startup and will regard all old Aria files as files moved from another system. This means that they have to be 'zerofilled' before they can be used. This will happen automatically at next access of the Aria files, which can take some time if the files are big.

    If this happens, you will see things like this in your mysqld.err file:

    As part of zerofilling no vital data is removed.

    How does one solve the Missing valid id error?

    See MDEV-6817 for details.

    This page is licensed: CC BY-SA / Gnu FDL

    sudo yum install MariaDB-backup
    sudo apt-get install mariadb-backup
    sudo zypper install MariaDB-backup
    mariadb-backup <options>
    CREATE USER 'mariadb-backup'@'localhost' IDENTIFIED BY 'mypassword';
    GRANT RELOAD, PROCESS, LOCK TABLES, BINLOG MONITOR ON *.* TO 'mariadb-backup'@'localhost';
    CREATE USER 'mariadb-backup'@'localhost' IDENTIFIED BY 'mypassword';
    GRANT RELOAD, PROCESS, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'mariadb-backup'@'localhost';
    GRANT SELECT, INSERT, CREATE, ALTER ON mysql.mariadb_backup_history TO 'mariadb-backup'@'localhost';
    GRANT SELECT, INSERT, CREATE, ALTER ON PERCONA_SCHEMA.xtrabackup_history TO 'mariadb-backup'@'localhost';
    GRANT DROP, ALTER, RENAME ON PERCONA_SCHEMA.xtrabackup_history TO 'mariadb-backup'@'localhost';
    GRANT CREATE ON PERCONA_SCHEMA TO 'mariadb-backup'@'localhost';
    RENAME TABLE PERCONA_SCHEMA.xtrabackup_history TO mysql.mariadb_backup_history;
    ALTER TABLE mysql.mariadb_backup_history ENGINE=InnoDB;
    mariadb-backup --backup \
       --target-dir=/var/mariadb/backup/ \
       --user=mariadb-backup --password=mypassword
    [mariadb-backup]
    user=mariadb-backup
    password=mypassword
    2019-02-12 09:48:38 7ffff7fdb820  InnoDB: Operating system error number 23 in a file operation.
    InnoDB: Error number 23 means 'Too many open files in system'.
    InnoDB: Some operating system error numbers are described at
    InnoDB: http://dev.mysql.com/doc/refman/5.6/en/operating-system-error-codes.html
    InnoDB: Error: could not open single-table tablespace file ./db1/tab1.ibd
    InnoDB: We do not continue the crash recovery, because the table may become
    InnoDB: corrupt if we cannot apply the log records in the InnoDB log to it.
    InnoDB: To fix the problem and start mysqld:
    InnoDB: 1) If there is a permission problem in the file and mysqld cannot
    InnoDB: open the file, you should modify the permissions.
    InnoDB: 2) If the table is not needed, or you can restore it from a backup,
    InnoDB: then you can remove the .ibd file, and InnoDB will do a normal
    InnoDB: crash recovery and ignore that table.
    InnoDB: 3) If the file system or the disk is broken, and you cannot remove
    InnoDB: the .ibd file, you can set innodb_force_recovery > 0 in my.cnf
    InnoDB: and force InnoDB to continue crash recovery here.
    [mariadb-backup]
    open_files_limit=65535
    mysql soft nofile 65535
    mysql hard nofile 65535
    ulimit -Sn
    65535
    ulimit -Hn
    65535
    CREATE USER 'mariadb-backup'@'localhost'
    IDENTIFIED BY 'mbu_passwd';
    
    GRANT RELOAD, PROCESS, LOCK TABLES, BINLOG MONITOR
    ON *.*
    TO 'mariadb-backup'@'localhost';
    CREATE USER 'mariadb-backup'@'localhost'
    IDENTIFIED BY 'mbu_passwd';
    
    GRANT RELOAD, PROCESS, LOCK TABLES, REPLICATION CLIENT
    ON *.*
    TO 'mariadb-backup'@'localhost';
    sudo mariadb-backup --backup \
          --target-dir=/data/backups/full \
          --user=mariadb-backup \
          --password=mbu_passwd
    sudo mariadb-backup --prepare \
       --use-memory=34359738368 \
       --target-dir=/data/backups/full
    mariadb-backup --copy-back --target-dir=/data/backups/full
    chown -R mysql:mysql /var/lib/mysql
    sudo systemctl start mariadb
    mariadb-backup --backup \
          --incremental-basedir=/data/backups/full \
          --target-dir=/data/backups/inc1 \
          --user=mariadb-backup \
          --password=mbu_passwd
    mariadb-backup --prepare --target-dir=/data/backups/full
    mariadb-backup --prepare \
          --target-dir=/data/backups/full \
          --incremental-dir=/data/backups/inc1
    mariadb-backup --copy-back --target-dir=/data/backups/full
    chown -R mysql:mysql /var/lib/mysql
    mariadb-backup --backup \
          --target-dir=/data/backups/part \
          --user=mariadb-backup \
          --password=mbu_passwd \
          --database-exclude=test
    mariadb-backup --backup \
          --incremental-basedir=/data/backups/part \
          --target-dir=/data/backups/part_inc1 \
          --user=mariadb-backup \
          --password=mbu_passwd  \
          --database-exclude=test
    mariadb-backup --prepare --export --target-dir=/data/backups/part
    mariadb-backup --prepare --export \
          --target-dir=/data/backups/part \
          --incremental-dir=/data/backups/part_inc1
    CREATE TABLE test.address_book (
       id INT PRIMARY KEY AUTO_INCREMENT,
       name VARCHAR(255),
       email VARCHAR(255));
    ALTER TABLE test.address_book DISCARD TABLESPACE;
    # cp /data/backups/part_inc1/test/address_book.* /var/lib/mysql/test
    # chown mysql:mysql /var/lib/mysql/test/address_book.*
    ALTER TABLE test.address_book IMPORT TABLESPACE;
    CREATE TABLE test.students (
       id INT PRIMARY KEY AUTO_INCREMENT,
       name VARCHAR(255),
       email VARCHAR(255),
       graduating_year YEAR)
    PARTITION BY RANGE (graduating_year) (
       PARTITION p0 VALUES LESS THAN (2019),
       PARTITION p1 VALUES LESS THAN MAXVALUE
    );
    CREATE TABLE test.students_work AS
    SELECT * FROM test.students WHERE NULL;
    ALTER TABLE test.students_work DISCARD TABLESPACE;
    # cp /data/backups/part_inc1/test/students.ibd /var/lib/mysql/test/students_work.ibd
    # cp /data/backups/part_inc1/test/students.cfg /var/lib/mysql/test/students_work.cfg
    # chown mysql:mysql /var/lib/mysql/test/students_work.*
    ALTER TABLE test.students_work IMPORT TABLESPACE;
    ALTER TABLE test.students EXCHANGE PARTITION p0 WITH TABLE test.students_work;
    DROP TABLE test.students_work;
    DROP TABLE IF EXISTS db1.t1;
    CREATE TABLE db1.t1(f1 CHAR(10)) ENGINE=INNODB;
    ALTER TABLE db1.t1 DISCARD TABLESPACE;
    $ sudo cp /data/backups/part/db1/t1.* /var/lib/mysql/db1
    $ sudo rm /var/lib/mysql/db1/t1.cfg
    $ sudo chown mysql:mysql /var/lib/mysql/db1/t1.*
    ALTER TABLE db1.t1 IMPORT TABLESPACE;
    SELECT * FROM db1.t1;
    +--------+
    | f1     |
    +--------+
    | ABC123 |
    +--------+
    ALTER TABLE db1.t1 FORCE, ADD FULLTEXT INDEX f_idx(f1);
    SHOW CREATE TABLE db1.t1\G
    *************************** 1. row ***************************
           Table: t1
    Create Table: CREATE TABLE `t1` (
      `f1` char(10) DEFAULT NULL,
      FULLTEXT KEY `f_idx` (`f1`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci
    mariadb-backup --prepare --target-dir=/data/backups/full
    cat /data/backups/full/xtrabackup_binlog_info
    
    mariadb-node4.00001     321
    [mysqld]
    datadir=/var/lib/mysql_new
    mariadb-backup --copy-back --target-dir=/data/backups/full
    chown -R mysql:mysql /var/lib/mysql_new
    sudo systemctl start mariadb
    mysqlbinlog --start-position=321 \
          --stop-datetime="2019-06-28 12:00:00" \
          /var/lib/mysql/mariadb-node4.00001 \
          > mariadb-binlog.sql
    mysql -u root -p < mariadb-binlog.sql

    in increments of
    1024

    Data Type: numeric

  • Default Value: 30

  • Range: 0 to 4294967295

  • Data Type: numeric

  • Default Value: 1048576

  • Range: 0 to 4294967295

  • Data Type: boolean

  • Default Value: OFF

  • Data Type: numeric

  • Default Value: 0

  • Dynamic: No

  • Data Type: string

  • Valid values:

    • none - Group commit is disabled.

    • hard - Wait the number of microseconds specified by aria_group_commit_interval before actually doing the commit. If the interval is 0 then just check if any other threads have requested a commit during the time this commit was preparing (just before sync() file) and send their data to disk also before sync().

    • soft - The service thread will wait the specified time and then sync() to the log. If the interval is 0 then it won't wait for any commits (this is dangerous and should generally not be used in production)

  • Default Value: none

  • Scope: Global
  • Dynamic: No

  • Type: numeric

  • Valid Values:

    • Default Value: 0 (no waiting)

    • Range: 0-4294967295

  • Data Type: string

  • Default Value: Same as DATADIR

  • Introduced: , MariaDB 10.6.13, MariaDB 10.11.3 (as a system variable, existed as an option only before that)

  • Data Type: numeric

  • Default Value: 1073741824

  • Dynamic: Yes
  • Data Type: enumeration

  • Default Value: immediate

  • Valid Values: immediate, external, at_flush

  • Data Type: numeric

  • Default Value: 9223372036853727232

  • Range: 0 to 9223372036854775807

  • Data Type: boolean

  • Default Value: ON

  • Data Type: numeric

  • Default Value: 300

  • Range: 100 to 9999900

  • Data Type: numeric

  • Default Value: 134217728 (128MB)

  • Range: 131072 (128KB) upwards

  • Data Type: numeric

  • Default Value: 100

  • Range: 1 to 100

  • Data Type: numeric

  • Default Value: 512

  • Range: 128 to 16384

  • Data Type: numeric

  • Default Value: 1

  • Range: 1 to 128

  • Introduced: MariaDB Community Server 12.1, ,

  • QUICK: Does not check rows in the table if there are no delete blocks.

  • FORCE: Runs the recovery even if it determines that more than one row from the data file are lost.

  • BACKUP: Keeps a backup of the data files.

  • Command line: --aria-recover-options[=#]

  • Scope: Global

  • Dynamic: Yes

  • Data Type: enumeration

  • Default Value:

    • BACKUP,QUICK (>= )

    • NORMAL (<= )

  • Valid Values: NORMAL, BACKUP, FORCE, QUICK, OFF

  • Introduced:

  • Data Type: numeric

  • Default Value: 1

  • Data Type: numeric

  • Default Value: 268434432

  • Scope: Global, Session

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: nulls_unequal

  • Valid Values: nulls_equal, nulls_unequal, nulls_ignored

  • Data Type: enumeration

  • Default Value: NEWFILE

  • Valid Values: NEWFILE, NEVER, ALWAYS

  • Dynamic: No

  • Data Type: boolean

  • Default Value: ON

  • Data Type: numeric

  • Default Value: 15

  • Range: 0 to 33

  • Data Type: numeric

  • Default Value: 4

  • Range: 0 to 32

  • Data Type: numeric

  • Default Value: 50000000

  • Range: 0 to 4294967295

  • Data Type: numeric

  • Default Value: 10000

  • Range: 0 to 4294967295

    Group commit (Already in )
  • Faster lookup in index pages (Page directory)

  • Allow 'any' length transactions to work (Having long running transactions will cause more log space to be used).

  • Allow log shipping; that is, you can do incremental backups of Aria tables just by copying the Aria logs.

  • Allow copying of Aria tables between different Aria servers (under some well-defined constraints).

  • Better blob handling (than is currently offered in MyISAM, at a minimum).

  • No memory copying or extra memory used for blobs on insert/update.

  • Blobs allocated in big sequential blocks - Less fragmentation over time.

  • Blobs are stored so that Aria can easily be extended to have access to any part of a blob with a single fetch in the future.

  • Efficient storage on disk (that is, low row data overhead, low page data overhead and little lost space on pages). Note: There is still some more work to succeed with this goal. The disk layout is fine, but we need more in-memory caches to ensure that we get a higher fill factor on the pages.

  • Small footprint, to make MariaDB + Aria suitable for desktop and embedded applications.

  • Flexible memory allocation and scalable algorithms to utilize large amounts of memory efficiently, when it is available.

  • Therefore, you can make a backup of Aria tables just by copying the log. The things that can't be replayed (yet) are:
    • Batch INSERT into an empty table (This includes LOAD DATA INFILE, SELECT... INSERT and INSERT (many rows)).

    • ALTER TABLE. Note that the .frm files are NOT recreated!

  • LOAD INDEX can skip index blocks for unwanted indexes.

  • Supports all MyISAM ROW formats and new PAGE format where data is stored in pages. (default size is 8K).

  • Multiple concurrent inserters into the same table.

  • When using PAGE format (default) row data is cached by page cache.

  • Aria has unit tests of most parts.

  • Supports both crash-safe (soon to be transactional) and non-transactional tables. (Non-transactional tables are not logged and their rows use less space): CREATE TABLE foo (...) TRANSACTIONAL=0|1 ENGINE=Aria.

  • PAGE is the only crash-safe/transactional row format.

  • PAGE format should give a notable speed improvement on systems which have bad data caching. (For example Windows).

  • From , max key length is 2000 bytes, compared to 1000 bytes in MyISAM.

  • Aria data pages in block format have an overhead of 10 bytes/page and 5 bytes/row. Transaction and multiple concurrent-writer support will use an extra overhead of 7 bytes for new rows, 14 bytes for deleted rows and 0 bytes for old compacted rows.

  • No external locking (MyISAM has external locking, but this is a rarely used feature).

  • Aria has one page size for both index and data (defined when Aria is used the first time). MyISAM supports different page sizes per index.

  • Small overhead (15 bytes) per index page.

  • Aria doesn't support MySQL internal RAID (disabled in MyISAM too, it's a deprecated feature).

  • Minimum data file size for PAGE format is 16K (with 8K pages).

  • Aria doesn't support indexes on virtual fields.
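The TRANSACTIONAL table option mentioned above can be sketched as follows (table names are hypothetical):

```sql
-- Crash-safe Aria table (logged; the default when TRANSACTIONAL is not given):
CREATE TABLE t_safe (id INT, msg VARCHAR(32)) ENGINE=Aria TRANSACTIONAL=1;

-- Non-transactional Aria table: not logged, and rows use less space:
CREATE TABLE t_fast (id INT, msg VARCHAR(32)) ENGINE=Aria TRANSACTIONAL=0;
```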


    Using CONNECT - Partitioning and Sharding

    The CONNECT storage engine has been deprecated.


    CONNECT supports the MySQL/MariaDB partition specification. It is done in a similar way to other storage engines, by using the PARTITION engine, which must be enabled for this to work. This type of partitioning is sometimes referred to as “horizontal partitioning”.

    Partitioning enables you to distribute portions of individual tables across a file system according to rules which you can set largely as needed. In effect, different portions of a table are stored as separate tables in different locations. The user-selected rule by which the division of data is accomplished is known as a partitioning function, which in MariaDB can be the modulus, simple matching against a set of ranges or value lists, an internal hashing function, or a linear hashing function.

    CONNECT takes this notion a step further, by providing two types of partitioning:

  • File partitioning. Each partition is stored in a separate file, as in multiple tables.

  • Table partitioning. Each partition is stored in a separate table, as in TBL tables.

    Partition Engine Issues

    Using partitions sometimes requires creating the tables in an unnatural way to avoid some error due to several partition engine bugs:

    1. Engine specific column and index options are not recognized and cause a syntax error when the table is created. The workaround is to create the table in two steps, a CREATE TABLE statement followed by an ALTER TABLE statement.

    2. The connection string, when specified for the table, is lost by the partition engine. The workaround is to specify the connection string in the option_list.

    3. MySQL upstream bug #71095: in the case of list columns partitioning, it sometimes causes a false “impossible where” clause to be raised. This makes an empty result be wrongly returned when it should not be empty. There is no workaround, but this bug will hopefully be fixed.

    The following examples use the workaround syntax described above to address these issues.

    File Partitioning

    File partitioning applies to file-based CONNECT table types. As with multiple tables, physical data is stored in several files instead of just one. The differences from multiple tables are:

    1. Data is distributed amongst the different files following the partition rule.

    2. Unlike multiple tables, partitioned tables are not read only.

    3. Unlike multiple tables, partitioned tables can be indexable.

    4. The file names are generated from the partition names.

    5. Query pruning is automatically made by the partition engine.

    The table file names are generated differently depending on whether the table is an inward or outward table. For inward tables, for which the file name is not specified, the partition file names are:

    For instance for the table:

    CONNECT will generate in the current data directory the files:

    This is similar to what the partition engine does for other engines; CONNECT partitioned inward tables behave like the partitioned tables of other engines, just with a different data format.

    Note: If sub-partitioning is used, inward table files and index files are named:

    Outward Tables

    The real problems occur with outward tables, in particular when they are created from already existing files. The first issue is to make the partition table use the correct existing file names. The second one, which applies only to existing non-empty tables, is to make sure the partitioning function matches the distribution of the data already existing in the files.

    The first issue is addressed by the way data file names are constructed. For instance, let us suppose we want to make a table from the fixed-format files:

    This can be done by creating a table such as:

    The rule is that, for each partition, the matching file name is internally generated by replacing the “%s” part of the given FILE_NAME option value with the partition name.

    If the table was initially empty, further inserts will populate it according to the partition function. However, if the files existed and contained data, it is your responsibility to determine what partition function actually matches the data distribution in them. This means in particular that partitioning by key or by hash cannot be used (except in exceptional cases) because you have almost no control over what the hashing algorithm does.

    In the example above, there is no problem if the table is initially empty, but if it is not, serious problems can arise if the initial distribution does not match the table distribution. Suppose a row in which id has the value 12 was initially contained in the part1.txt file. It will be seen when selecting the whole table, but if you ask:

    The result will have 0 rows. This is because according to the partition function query pruning will only look inside the second partition and will miss the row that is in the wrong partition.

    One way to check for a wrong distribution is, for instance, to compare the results of queries such as:

    And

    If they match, the distribution can be correct although this does not prove it. However, if they do not match, the distribution is surely wrong.

    Partitioning on a Special Column

    There are some cases where the files of a multiple table do not contain columns that can be used for range or list partitioning. For instance, let’s suppose we have a multiple table based on the following files:

    Each of them containing the same kind of data:

    A multiple table can be created on them, for instance by:

    The issue is that if we want to create a partitioned table on these files, there are no columns to use for defining a partition function. Each city file can have the same kind of column values and there is no way to distinguish them.

    However, there is a solution. It is to add to the table a special column that will be used by the partition function. For instance, the new table can be created by:

    Note 1: we had to do it in two steps because of the column CONNECT options.

    Note 2: the special column PARTID returns the name of the partition in which the row is located.

    Note 3: here we could have used the FNAME special column instead because the file name is specified as being the partition name.

    This may seem rather pointless, because it means for instance that a row will be in partition boston if it belongs to partition boston! However, it works because the partition engine doesn’t know about special columns and behaves as if the city column was a real column.

    What happens if we populate it?

    The value given for the city column (explicitly or by default) will be used by the partition engine to decide in which partition to insert the rows. It will be ignored by CONNECT (a special column cannot be given a value) but will later return the matching value. For instance:

    This query returns:

| city    | first_name | job      |
| ------- | ---------- | -------- |
| boston  | Johnny     | RESEARCH |
| chicago | Jim        | SALES    |

    Everything works as if the city column was a real column contained in the table data files.

    Partitioning of Zipped Tables

    Two cases are currently supported. If a table is based on several zipped files, partitioning is done the standard way, as above: it is the file_name option, specifying the name of the zip files, that must contain the ‘%s’ part used to generate the file names. If a table is based on only one zip file containing several entries, this is indicated by placing the ‘%s’ part in the entry option value. Note: if a table is based on several zipped files, each containing several entries, only the first case is possible; using sub-partitioning to make partitions on each entry is not supported yet.

    Table Partitioning

    With table partitioning, each partition is physically represented by a sub-table. Compared to standard partitioning, this brings the following features:

    1. The partitions can be tables driven by different engines. This relieves the current existing limitation of the partition engine.

    2. The partitions can be tables driven by engines not currently supporting partitioning.

    3. Partition tables can be located on remote servers, enabling table sharding.

    4. Like for TBL tables, the columns of the partition table do not necessarily match the columns of the sub-tables.

    The way it is done is to create the partition table with a table type referring to other tables: PROXY, MYSQL, ODBC or JDBC. Let us see how this is done with a simple example. Suppose we have created the following tables:

    We can for instance create a partition table using these tables as physical partitions by:

    Here the name of each partition sub-table is made by replacing the ‘%s’ part of the tabname option value with the partition name. Now if we do:

    The rows are distributed in the different sub-tables according to the partition function. This can be seen by executing the query:

    This query replies:

| partition_name | table_rows |
| -------------- | ---------- |
| 1              | 4          |
| 2              | 4          |
| 3              | 3          |

    Query pruning is of course automatic, for instance:

    This query replies:

| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | Extra |
| -- | ----------- | ----- | ---------- | ---- | ------------- | --- | ------- | --- | ---- | ----- |
| 1  | SIMPLE      | part5 | 3          | ALL  |               |     |         |     | 22   | Using where |

    When executing this SELECT query, only sub-table xt3 will be used.

    Indexing with Table Partitioning

    Using the PROXY table type seems natural. However, in the current version, the issue is that PROXY (and ODBC) tables are not indexable. This is why, if you want the table to be indexed, you must use the MYSQL table type. The CREATE TABLE statement is almost the same:

    The column id is declared as a key, and the table type is now MYSQL. This makes sub-tables be accessed by calling a MariaDB server, as MYSQL tables do. Note that this modifies only the way CONNECT sub-tables are accessed.

    However, indexing just makes the partitioned table use “remote indexing”, the way FEDERATED tables do. This means that when sending the query to retrieve the table data, a WHERE clause will be added to the query. For instance, let’s suppose you ask:

    The query sent to the server will be:

    On a query like this one, it does not change much, because the WHERE clause could have been added anyway by the cond_push function, but it does make a difference in the case of joins. The main thing to understand is that real indexing is done by the called table, which therefore should be indexed.

    This also means that the xt1, xt2, and xt3 table indexes should be made separately, because creating the t2 table as indexed does not create the indexes on the sub-tables.

    Sharding with Table Partitioning

    Using table partitioning can have one more advantage. Because the sub-tables can address a table located on another server, it is possible to shard a table across separate servers and hardware machines. This may be required to access, as one table, data already located on several remote machines, such as the servers of a company's branches. Or it can simply be used to split a huge table for performance reasons. For instance, suppose we have created the following tables:

    Creating the partition table accessing all of these is almost like what we did with the t4 table:

    The only difference is the tabname option now referring to the rt1, rt2, and rt3 tables. However, even if it works, this is not the best way to do it. This is because accessing a table via the MySQL API is done twice per table: once by CONNECT to access the FEDERATED table on the local server, and a second time by the FEDERATED engine to access the remote table.

    The CONNECT MYSQL table type being used anyway, you’d rather use it to directly access the remote tables. Indeed, the partition names can also be used to modify the connection URLs. For instance, in the case shown above, the partition table can be created as:

    Several things can be noted here:

    1. As we have seen before, the partition engine currently loses the connection string. This is why it was specified as “connect” in the option list.

    2. For each partition sub-table, the “%s” part of the connection string has been replaced by the partition name.

    3. It is no longer necessary to define the rt1, rt2, and rt3 tables (even if it does no harm), and the FEDERATED engine is no longer used to access the remote tables.

    This is a simple case where the connection string is almost the same for all the sub-tables. But what if the sub-tables are accessed by very different connection strings? For instance:

    There are two solutions. The first one is to use the varying parts of the connection strings as partition names:

    The second one, which avoids overly complicated partition names, is to create federated servers to access the remote tables (if they do not already exist; otherwise, just use them). For instance, the first one could be:

    Similarly, “server_two” and “server_three” would be created and the final partition table would be created as:

    It would be even simpler if all remote tables had the same name on the remote databases, for instance if they all were named xt1, the connection string could be set as “server_%s/xt1” and the partition names would be just “one”, “two”, and “three”.

    Sharding on a Special Column

    The technique we have seen above with file partitioning is also available with table partitioning. Companies wishing to access, as one table, data sharded across their branch servers can, as we have seen, add a special column to the table definition. For instance:

    This example assumes that federated servers named “server_main”, “server_east” and “server_west” have been created, and that all remote tables are named “sales”. Note also that in this example the column id is no longer a key.

    Current Partition Limitations

    Because the partition engine was written before some other engines were added to MariaDB, the way it works is sometimes incompatible with these engines, in particular with CONNECT.

    Update statement

    With the sample tables above, you can do update statements such as:

    It works perfectly and is accepted by CONNECT. However, let us consider the statement:

    This statement is not accepted by CONNECT. The reason is that the column id is part of the partition function, so changing its value may require the modified row to be moved to another partition. The way the partition engine does this is to delete the old row and re-insert the new modified one. However, this is done in a way that is not currently compatible with CONNECT (remember that CONNECT supports UPDATE in a specific way, in particular for the table type MYSQL). This limitation could be temporary. Meanwhile, the workaround is to do manually what the partition engine would do: delete the row to modify and insert the modified row:

    Alter Table statement

    For all CONNECT outward tables, the ALTER TABLE statement does not make any change to the table data. This is why ALTER TABLE should not be used, in particular to modify the partition definition, except of course to correct a wrong definition. Note that using ALTER TABLE to create a partition table in two steps (because column options would otherwise be lost) is valid, as it applies to a table that is not yet partitioned.

    As we have seen, it is also safe to use it to create or drop indexes. Otherwise, a simple rule of thumb is to avoid altering a table definition and better drop and re-create a table whose definition must be modified. Just remember that for outward CONNECT tables, dropping a table does not erase the data and that creating it does not modify existing data.

    Rowid special column

    Each partition is handled separately as one table, so the ROWID special column returns the rank of the row within its partition, not within the whole table. This means that for partitioned tables, ROWID and ROWNUM are equivalent.

    This page is licensed: CC BY-SA / Gnu FDL


    A full list of server options, system variables and status variables is available in the server reference documentation.

    Data file name: table_name#P#partition_name.table_file_type
    Index file name: table_name#P#partition_name.index_file_type
    CREATE TABLE t1 (
    id INT KEY NOT NULL,
    msg VARCHAR(32))
    ENGINE=CONNECT TABLE_TYPE=FIX
    PARTITION BY RANGE(id) (
    PARTITION first VALUES LESS THAN(10),
    PARTITION middle VALUES LESS THAN(50),
    PARTITION last VALUES LESS THAN(MAXVALUE));
    t1#P#first.fix
    t1#P#first.fnx
    t1#P#middle.fix
    t1#P#middle.fnx
    t1#P#last.fix
    t1#P#last.fnx
    table_name#P#partition_name#SP#subpartition_name.type
    table_name#P#partition_name#SP#subpartition_name.index_type
    E:\Data\part1.txt
    E:\Data\part2.txt
    E:\Data\part3.txt
    CREATE TABLE t2 (
    id INT NOT NULL,
    msg VARCHAR(32),
    INDEX XID(id))
    ENGINE=connect table_type=FIX file_name='E:/Data/part%s.txt'
    PARTITION BY RANGE(id) (
    PARTITION `1` VALUES LESS THAN(10),
    PARTITION `2` VALUES LESS THAN(50),
    PARTITION `3` VALUES LESS THAN(MAXVALUE));
    SELECT * FROM t2 WHERE id = 12;
    SELECT partition_name, table_rows FROM
    information_schema.partitions WHERE table_name = 't2';
    SELECT CASE WHEN id < 10 THEN 1 WHEN id < 50 THEN 2 ELSE 3 END
    AS pn, COUNT(*) FROM part3 GROUP BY pn;
    tmp/boston.txt
    tmp/chicago.txt
    tmp/atlanta.txt
    ID: int
    First_name: varchar(16)
    Last_name: varchar(30)
    Birth: date
    Hired: date
    Job: char(10)
    Salary: double(8,2)
    CREATE TABLE mulemp (
    id INT NOT NULL,
    first_name VARCHAR(16) NOT NULL,
    last_name VARCHAR(30) NOT NULL,
    birth DATE NOT NULL date_format='DD/MM/YYYY',
    hired DATE NOT NULL date_format='DD/MM/YYYY',
    job CHAR(10) NOT NULL,
    salary DOUBLE(8,2) NOT NULL
    ) ENGINE=CONNECT table_type=FIX file_name='tmp/*.txt' multiple=1;
    CREATE TABLE partemp (
    id INT NOT NULL,
    first_name VARCHAR(16) NOT NULL,
    last_name VARCHAR(30) NOT NULL,
    birth DATE NOT NULL date_format='DD/MM/YYYY',
    hired DATE NOT NULL date_format='DD/MM/YYYY',
    job CHAR(16) NOT NULL,
    salary DOUBLE(10,2) NOT NULL,
    city CHAR(12) DEFAULT 'boston' special=PARTID,
    INDEX XID(id)
    ) ENGINE=CONNECT table_type=FIX file_name='E:/Data/Test/%s.txt';
    ALTER TABLE partemp
    PARTITION BY LIST COLUMNS(city) (
    PARTITION `atlanta` VALUES IN('atlanta'),
    PARTITION `boston` VALUES IN('boston'),
    PARTITION `chicago` VALUES IN('chicago'));
    INSERT INTO partemp(id,first_name,last_name,birth,hired,job,salary) VALUES
    (1205,'Harry','Cover','1982-10-07','2010-09-21','MANAGEMENT',125000.00);
    INSERT INTO partemp VALUES
    (1524,'Jim','Beams','1985-06-18','2012-07-25','SALES',52000.00,'chicago'),
    (1431,'Johnny','Walker','1988-03-12','2012-08-09','RESEARCH',46521.87,'boston'),
    (1864,'Jack','Daniels','1991-12-01','2013-02-16','DEVELOPMENT',63540.50,'atlanta');
    SELECT city, first_name, job FROM partemp WHERE id IN (1524,1431);
    CREATE TABLE xt1 (
    id INT NOT NULL,
    msg VARCHAR(32))
    ENGINE=myisam;
    
    CREATE TABLE xt2 (
    id INT NOT NULL,
    msg VARCHAR(32)); /* engine=innoDB */
    
    CREATE TABLE xt3 (
    id INT NOT NULL,
    msg VARCHAR(32))
    ENGINE=connect table_type=CSV;
    CREATE TABLE t3 (
    id INT NOT NULL,
    msg VARCHAR(32))
    ENGINE=connect table_type=PROXY tabname='xt%s'
    PARTITION BY RANGE COLUMNS(id) (
    PARTITION `1` VALUES LESS THAN(10),
    PARTITION `2` VALUES LESS THAN(50),
    PARTITION `3` VALUES LESS THAN(MAXVALUE));
    INSERT INTO t3 VALUES
    (4, 'four'),(7,'seven'),(10,'ten'),(40,'forty'),
    (60,'sixty'),(81,'eighty one'),(72,'seventy two'),
    (11,'eleven'),(1,'one'),(35,'thirty five'),(8,'eight');
    SELECT partition_name, table_rows FROM
    information_schema.partitions WHERE table_name = 't3';
    EXPLAIN PARTITIONS SELECT * FROM t3 WHERE id = 81;
    CREATE TABLE t4 (
    id INT KEY NOT NULL,
    msg VARCHAR(32))
    ENGINE=connect table_type=MYSQL tabname='xt%s'
    PARTITION BY RANGE COLUMNS(id) (
    PARTITION `1` VALUES LESS THAN(10),
    PARTITION `2` VALUES LESS THAN(50),
    PARTITION `3` VALUES LESS THAN(MAXVALUE));
    SELECT * FROM t4 WHERE id = 7;
    SELECT `id`, `msg` FROM `xt1` WHERE `id` = 7
    CREATE TABLE rt1 (id INT KEY NOT NULL, msg VARCHAR(32))
    ENGINE=federated connection='mysql://root@host1/test/sales';
    
    CREATE TABLE rt2 (id INT KEY NOT NULL, msg VARCHAR(32))
    ENGINE=federated connection='mysql://root@host2/test/sales';
    
    CREATE TABLE rt3 (id INT KEY NOT NULL, msg VARCHAR(32))
    ENGINE=federated connection='mysql://root@host3/test/sales';
    CREATE TABLE t5 (
    id INT KEY NOT NULL,
    msg VARCHAR(32))
    ENGINE=connect table_type=MYSQL tabname='rt%s'
    PARTITION BY RANGE COLUMNS(id) (
    PARTITION `1` VALUES LESS THAN(10),
    PARTITION `2` VALUES LESS THAN(50),
    PARTITION `3` VALUES LESS THAN(MAXVALUE));
    CREATE TABLE t6 (
    id INT KEY NOT NULL,
    msg VARCHAR(32))
    ENGINE=connect table_type=MYSQL
    option_list='connect=mysql://root@host%s/test/sales'
    PARTITION BY RANGE COLUMNS(id) (
    PARTITION `1` VALUES LESS THAN(10),
    PARTITION `2` VALUES LESS THAN(50),
    PARTITION `3` VALUES LESS THAN(MAXVALUE));
    For rt1: connection='mysql://root:tinono@127.0.0.1:3307/test/xt1'
    For rt2: connection='mysql://foo:foopass@denver/dbemp/xt2'
    For rt3: connection='mysql://root@huston:5505/test/tabx'
    CREATE TABLE t7 (
    id INT KEY NOT NULL,
    msg VARCHAR(32))
    ENGINE=connect table_type=MYSQL
    option_list='connect=mysql://%s'
    PARTITION BY RANGE COLUMNS(id) (
    PARTITION `root:tinono@127.0.0.1:3307/test/xt1` VALUES LESS THAN(10),
    PARTITION `foo:foopass@denver/dbemp/xt2` VALUES LESS THAN(50),
    PARTITION `root@huston:5505/test/tabx` VALUES LESS THAN(MAXVALUE));
    CREATE SERVER `server_one` FOREIGN DATA WRAPPER 'mysql'
    OPTIONS
    (HOST '127.0.0.1',
    DATABASE 'test',
    USER 'root',
    PASSWORD 'tinono',
    PORT 3307);
    CREATE TABLE t8 (
    id INT KEY NOT NULL,
    msg VARCHAR(32))
    ENGINE=connect table_type=MYSQL
    option_list='connect=server_%s'
    PARTITION BY RANGE COLUMNS(id) (
    PARTITION `one/xt1` VALUES LESS THAN(10),
    PARTITION `two/xt2` VALUES LESS THAN(50),
    PARTITION `three/tabx` VALUES LESS THAN(MAXVALUE));
    CREATE TABLE t9 (
    id INT NOT NULL,
    msg VARCHAR(32),
    branch CHAR(16) DEFAULT 'main' special=PARTID,
    INDEX XID(id))
    ENGINE=connect table_type=MYSQL
    option_list='connect=server_%s/sales'
    PARTITION BY LIST COLUMNS(branch) (
    PARTITION `main` VALUES IN('main'),
    PARTITION `east` VALUES IN('east'),
    PARTITION `west` VALUES IN('west'));
    UPDATE t2 SET msg = 'quatre' WHERE id = 4;
    UPDATE t2 SET id = 41 WHERE msg = 'four';
    DELETE FROM t2 WHERE id = 4;
    INSERT INTO t2 VALUES(41, 'four');



    Partitioning Overview

    Learn the fundamentals of table partitioning in MariaDB, including its benefits for performance, maintenance, and managing large datasets.

    In MariaDB, a table can be split into smaller subsets. Both data and indexes are partitioned.

    Uses for Partitioning

    There can be several reasons to use this feature:

    • If you often need to delete a large set of rows, such as all rows for a given year, using partitions can help, as dropping a partition with many rows is very fast, while deleting a lot of rows can be very slow.

    • Very large tables and indexes can be slow even with optimized queries. But if the target table is partitioned, queries that read a small number of partitions can be much faster. However, this means that the queries have to be written carefully in order to only access a given set of partitions.

    • Partitioning allows one to distribute files over multiple storage devices. For example, we can have historical data on slower, larger disks (historical data are not supposed to be frequently read); and current data can be on faster disks, or SSD devices.

    • In case we separate historical data from recent data, we will probably need to take regular backups of one partition, not the whole table.

    Partitioning Types

    When partitioning a table, the user should decide on:

    • a partitioning type;

    • a partitioning expression.

    A partitioning type is the method used by MariaDB to decide how rows are distributed over existing partitions. Choosing the proper partitioning type is important to distribute rows over partitions in an efficient way.

    With some partitioning types, a partitioning expression is also required. A partitioning expression is an SQL expression returning an integer or temporal value, used to determine which partition contains a given row. The partitioning expression is used for all reads and writes involving the partitioned table, so it should be fast.

    MariaDB supports the following partitioning types: RANGE, LIST, RANGE COLUMNS, LIST COLUMNS, HASH, LINEAR HASH, KEY, and LINEAR KEY.

    Enabling Partitioning

    By default, MariaDB permits partitioning. You can determine this by using the SHOW PLUGINS statement, for example:
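As a sketch, either of the following shows the status of the partition plugin:

```sql
SHOW PLUGINS;

-- Or, more selectively:
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.PLUGINS
WHERE PLUGIN_NAME = 'partition';
```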

    If partition is listed as DISABLED, MariaDB has either been built without partitioning support, or has been started with the --skip-partition option or one of its variants, and you will not be able to create partitions.

    Using Partitions

    It is possible to create a new partitioned table using CREATE TABLE.

    ALTER TABLE allows one to:

    • Partition an existing table;

    • Remove partitions from a partitioned table (with all data in the partition);

    • Add/remove partitions, or reorganize them, as long as the partitioning function allows these operations (see below);

    • Exchange a partition with a table;

    Adding Partitions

    [ALTER TABLE](../../reference/sql-statements-and-structure/sql-statements/data-definition/alter/alter-table.md) ... ADD PARTITION can be used to add partitions to an existing table:

    With RANGE partitions, it is only possible to add a partition to the high end of the range, not the low end. For example, the following results in an error:

    You can work around this by using REORGANIZE PARTITION to split the partition instead. See Splitting Partitions below.
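A sketch of both cases, with hypothetical table and partition names:

```sql
-- A RANGE-partitioned table (illustrative):
CREATE TABLE t_range (id INT)
PARTITION BY RANGE (id) (
  PARTITION p10 VALUES LESS THAN (10),
  PARTITION p20 VALUES LESS THAN (20)
);

-- Adding a partition at the high end of the range works:
ALTER TABLE t_range ADD PARTITION (PARTITION p30 VALUES LESS THAN (30));

-- Adding at the low end fails, since ranges must be strictly increasing:
-- ALTER TABLE t_range ADD PARTITION (PARTITION p5 VALUES LESS THAN (5));
```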

    Coalescing Partitions

    ALTER TABLE ... COALESCE PARTITION is used to reduce the number of HASH or KEY partitions by the specified number. For example, given the following table with 5 partitions:

    The following statement reduces the number of partitions by 2, leaving the table with 3 partitions:
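As a sketch (the table name is hypothetical):

```sql
-- A table with 5 HASH partitions (illustrative):
CREATE TABLE t_hash (id INT)
PARTITION BY HASH (id)
PARTITIONS 5;

-- Reduce the number of partitions by 2, leaving 3:
ALTER TABLE t_hash COALESCE PARTITION 2;
```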

    Converting Partitions to/from Tables

    This feature is available from MariaDB 10.7.

    ALTER TABLE ... CONVERT PARTITION can be used to convert a partition in an existing table to a standalone table:

    CONVERT TABLE does the reverse, converting a table into a partition:
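A sketch of both directions, using the hypothetical RANGE-partitioned table t_range with a partition p10:

```sql
-- Turn partition p10 into the standalone table t_old (MariaDB 10.7+):
ALTER TABLE t_range CONVERT PARTITION p10 TO TABLE t_old;

-- And the reverse: make table t_old a partition of t_range again:
ALTER TABLE t_range CONVERT TABLE t_old TO PARTITION p10 VALUES LESS THAN (10);
```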

    CONVERT TABLE ... WITH / WITHOUT VALIDATION

    This feature is available from MariaDB 11.4.

    When converting tables to a partition, validation is performed on each row to ensure it meets the partition requirements. This can be very slow in the case of larger tables. It is possible to disable this validation by specifying the WITHOUT VALIDATION option.

    WITH VALIDATION will result in the validation being performed, and is the default behaviour.

    An alternative way to convert partitions to tables is to use ALTER TABLE ... EXCHANGE PARTITION. This requires manually performing the following steps:

    1. Create an empty table with the same structure as the partition.

    2. Exchange the table with the partition.

    3. Drop the empty partition.

    For example:
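The three steps above can be sketched as follows (table and partition names are illustrative):

```sql
-- 1. Create an empty table with the same structure as the partition:
CREATE TABLE t_extract LIKE t_range;
ALTER TABLE t_extract REMOVE PARTITIONING;

-- 2. Exchange the table with the partition:
ALTER TABLE t_range EXCHANGE PARTITION p10 WITH TABLE t_extract;

-- 3. Drop the now-empty partition:
ALTER TABLE t_range DROP PARTITION p10;
```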

    Similarly, to do the reverse and convert a table into a partition [ALTER TABLE](../../reference/sql-statements-and-structure/sql-statements/data-definition/alter/alter-table.md) ... EXCHANGE PARTITION can also be used, with the following manual steps required:

    • create the partition

    • exchange the partition with the table

    • drop the old table:

    For example:
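A sketch of the reverse direction (names are illustrative):

```sql
-- 1. Create an empty partition to receive the rows:
ALTER TABLE t_range ADD PARTITION (PARTITION p40 VALUES LESS THAN (40));

-- 2. Exchange the partition with the table:
ALTER TABLE t_range EXCHANGE PARTITION p40 WITH TABLE t_new;

-- 3. Drop the old, now-empty table:
DROP TABLE t_new;
```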

    Dropping Partitions

    ALTER TABLE ... DROP PARTITION can be used to drop specific partitions (and discard all data within the specified partitions) for RANGE and LIST partitions. It cannot be used on HASH or KEY partitions. To instead remove all partitioning while leaving the data unaffected, see Removing Partitioning below.
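For example, a minimal sketch (table and partition names are hypothetical):

```sql
-- Discard all rows stored in partition p10 of a RANGE-partitioned table:
ALTER TABLE t_range DROP PARTITION p10;
```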

    Exchanging Partitions

    ALTER TABLE t1 EXCHANGE PARTITION p1 WITH TABLE t2 allows to exchange a partition or subpartition with another table.

    The following requirements must be met:

    • Table t1 must be partitioned, and table t2 cannot be partitioned.

    • Table t2 cannot be a temporary table.

    • Table t1 and t2 must otherwise be identical.

    • Any existing row in t2 must match the conditions for storage in the exchanged partition p1 unless, from , the WITHOUT VALIDATION option is specified.

    By default, MariaDB performs the validation to see that each row meets the partition requirements, and the statement fails if a row does not fit.

    This attempted exchange fails, as the value is already in t2, and 2015-05-05 is outside of the partition conditions:
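A hypothetical reconstruction of such a failing exchange (the original example is not preserved here; table and partition names are illustrative):

```sql
CREATE TABLE t1 (d DATE)
PARTITION BY RANGE (YEAR(d)) (
  PARTITION p2014 VALUES LESS THAN (2015),
  PARTITION p2015 VALUES LESS THAN (2016)
);
CREATE TABLE t2 (d DATE);
INSERT INTO t2 VALUES ('2015-05-05');

-- Fails: '2015-05-05' does not satisfy p2014's condition (YEAR(d) < 2015):
ALTER TABLE t1 EXCHANGE PARTITION p2014 WITH TABLE t2;
```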

    WITH / WITHOUT VALIDATION

    This feature is available from MariaDB 11.4.

    This validation is performed for each row, and can be very slow in the case of larger tables. It is possible to disable this validation by specifying the WITHOUT VALIDATION option:
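For example (names are illustrative):

```sql
-- Skip per-row validation when exchanging (MariaDB 11.4+):
ALTER TABLE t_range EXCHANGE PARTITION p10 WITH TABLE t_extract WITHOUT VALIDATION;
```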

    WITH VALIDATION results in the validation being performed, and is the default behavior.

    Removing Partitioning

    ALTER TABLE ... REMOVE PARTITIONING removes all partitioning from the table, while leaving the data unaffected. To instead drop a particular partition (and discard all of its data), see Dropping Partitions above.

    Reorganizing Partitions

    Reorganizing partitions allows one to adjust existing partitions without losing data. Specifically, the ALTER TABLE ... REORGANIZE PARTITION statement can be used for:

    • Splitting an existing partition into multiple partitions.

    • Merging a number of existing partitions into a new, single, partition.

    • Changing the ranges for a subset of existing partitions defined using VALUES LESS THAN.

    • Changing the value lists for a subset of partitions defined using VALUES IN.

    Splitting Partitions

    An existing partition can be split into multiple partitions. This can also be used to add a new partition at the low end of a RANGE-partitioned table (which is not possible with ADD PARTITION).
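For example, splitting the lowest partition of the t1 table used throughout this page's examples (partitioned by RANGE (YEAR(dt)) into p0 through p3):

```sql
ALTER TABLE t1 REORGANIZE PARTITION p0 INTO (
    PARTITION p0a VALUES LESS THAN (2012),
    PARTITION p0b VALUES LESS THAN (2013)
);
```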

    Similarly, if MAXVALUE binds the high end:
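For instance, assuming a top partition p4 defined with VALUES LESS THAN MAXVALUE, it can be split while keeping MAXVALUE at the high end:

```sql
ALTER TABLE t1 REORGANIZE PARTITION p4 INTO (
    PARTITION p4 VALUES LESS THAN (2017),
    PARTITION p5 VALUES LESS THAN MAXVALUE
);
```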

    Merging Partitions

    A number of existing partitions can be merged into a new partition, for example:
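For example, merging partitions p2 and p3 of the t1 table used in this page's examples:

```sql
ALTER TABLE t1 REORGANIZE PARTITION p2,p3 INTO (
    PARTITION p2 VALUES LESS THAN (2016)
);
```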

    Changing Ranges
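For example, extending the upper range of the last partition of the t1 table used in this page's examples:

```sql
ALTER TABLE t1 REORGANIZE PARTITION p3 INTO (
  PARTITION p3 VALUES LESS THAN (2017)
);
```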

    Renaming Partitions

    The REORGANIZE PARTITION statement can also be used for renaming partitions. Note that this creates a copy of the partition:
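For example, renaming partition p3 of the t1 table used in this page's examples:

```sql
ALTER TABLE t1 REORGANIZE PARTITION p3 INTO (
  PARTITION p3_new VALUES LESS THAN (2016)
);
```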

    Truncating Partitions

    ALTER TABLE ... TRUNCATE PARTITION

    removes all data from the specified partition(s), leaving the table and partition structure unchanged. Partitions don't need to be contiguous:
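For example, truncating two non-contiguous partitions of the t1 table:

```sql
ALTER TABLE t1 TRUNCATE PARTITION p0,p2;
```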

    Analyzing Partitions

    Similar to ANALYZE TABLE, key distributions for specific partitions can also be analyzed and stored, for example:
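Analyzing three partitions of the t1 table from this page's examples:

```sql
ALTER TABLE t1 ANALYZE PARTITION p0,p1,p3;
```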

    Checking Partitions

    Similar to CHECK TABLE, specific partitions can be checked for errors, for example:
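Checking two partitions of the t1 table from this page's examples:

```sql
ALTER TABLE t1 CHECK PARTITION p1,p3;
```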

    The ALL keyword can be used in place of the list of partition names, in which case the check operation is performed on all partitions.

    Repairing Partitions

    Similar to REPAIR TABLE, specific partitions can be repaired:
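Repairing two partitions of the t1 table from this page's examples:

```sql
ALTER TABLE t1 REPAIR PARTITION p0,p3;
```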

    As with REPAIR TABLE, the QUICK and EXTENDED options are available. However, the USE_FRM option cannot be used with this statement on a partitioned table.

    REPAIR PARTITION fails if there are duplicate key errors. ALTER IGNORE TABLE ... REPAIR PARTITION can be used in this case.

    The ALL keyword can be used in place of the list of partition names, in which case the repair operation is performed on all partitions.

    Optimizing Partitions

    Similar to OPTIMIZE TABLE, specific partitions can be optimized:
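Optimizing two partitions of the t1 table from this page's examples:

```sql
ALTER TABLE t1 OPTIMIZE PARTITION p0,p3;
```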

    OPTIMIZE PARTITION does not support per-partition optimization on InnoDB tables; it issues a warning and causes the entire table to be rebuilt and analyzed. ALTER TABLE ... REBUILD PARTITION and ALTER TABLE ... ANALYZE PARTITION can be used instead.

    The ALL keyword can be used in place of the list of partition names, in which case the optimize operation is performed on all partitions.

    Partitioning for Specific Storage Engines

    Some MariaDB storage engines allow more interesting uses for partitioning.

    The MERGE storage engine allows one to:

    • Treat a set of identically defined MyISAM tables as one.

    • A MyISAM table can be in many different MERGE sets and also used separately.

    SPIDER allows one to:

    • Move partitions of the same table onto different servers. In this way, the workload can be distributed across more physical or virtual machines (data sharding).

    • All partitions of a SPIDER table can also live on the same machine. In this case there is a small overhead (SPIDER uses connections to localhost), but queries that read multiple partitions will use parallel threads.

    CONNECT allows one to:

    • Build a table whose partitions are tables using different storage engines (like InnoDB, MyISAM, or even engines that do not support partitioning).

    • Build an indexable, writeable table on several data files. These files can be in different formats.

    See also: Using CONNECT - Partitioning and Sharding.

    See Also

    • INFORMATION_SCHEMA.PARTITIONS contains information about existing partitions.

    • Partition Maintenance, for suggestions on using partitions.

    This page is licensed: CC BY-SA / Gnu FDL

    ADD PARTITION [IF NOT EXISTS] (partition_definition)
    CREATE OR REPLACE TABLE t1 (
      dt DATETIME NOT NULL
    )
      ENGINE = InnoDB
      PARTITION BY RANGE (YEAR(dt))
      (
      PARTITION p0 VALUES LESS THAN (2013),
      PARTITION p1 VALUES LESS THAN (2014),
      PARTITION p2 VALUES LESS THAN (2015),
      PARTITION p3 VALUES LESS THAN (2016)
    );
    
    ALTER TABLE t1 ADD PARTITION (
      PARTITION p4 VALUES LESS THAN (2017), 
      PARTITION p5 VALUES LESS THAN (2018)
    );
    ALTER TABLE t1 ADD PARTITION (
      PARTITION p0a VALUES LESS THAN (2012)
    );
    ERROR 1493 (HY000): VALUES LESS THAN value must be strictly increasing for each partition
    COALESCE PARTITION number
    CREATE OR REPLACE TABLE t1 (v1 INT)
      PARTITION BY KEY (v1)
      PARTITIONS 5;
    ALTER TABLE t1 COALESCE PARTITION 2;
    CONVERT PARTITION partition_name TO TABLE tbl_name
    CONVERT TABLE normal_table TO partition_definition
    CREATE OR REPLACE TABLE t1 (
       dt DATETIME NOT NULL
     )
       ENGINE = InnoDB
       PARTITION BY RANGE (YEAR(dt))
       (
       PARTITION p0 VALUES LESS THAN (2013),
       PARTITION p1 VALUES LESS THAN (2014),
       PARTITION p2 VALUES LESS THAN (2015),
       PARTITION p3 VALUES LESS THAN (2016)
     );
    
    INSERT INTO t1 VALUES ('2013-11-11'),('2014-11-11'),('2015-11-11');
    
    SELECT * FROM t1;
    +---------------------+
    | dt                  |
    +---------------------+
    | 2013-11-11 00:00:00 |
    | 2014-11-11 00:00:00 |
    | 2015-11-11 00:00:00 |
    +---------------------+
    
    ALTER TABLE t1 CONVERT PARTITION p3 TO TABLE t2;
    
    SELECT * FROM t1;
    +---------------------+
    | dt                  |
    +---------------------+
    | 2013-11-11 00:00:00 |
    | 2014-11-11 00:00:00 |
    +---------------------+
    
    SELECT * FROM t2;
    +---------------------+
    | dt                  |
    +---------------------+
    | 2015-11-11 00:00:00 |
    +---------------------+
    
    SHOW CREATE TABLE t1\G
    *************************** 1. row ***************************
           TABLE: t1
    CREATE TABLE: CREATE TABLE `t1` (
      `dt` datetime NOT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_swedish_ci
     PARTITION BY RANGE (year(`dt`))
    (PARTITION `p0` VALUES LESS THAN (2013) ENGINE = InnoDB,
     PARTITION `p1` VALUES LESS THAN (2014) ENGINE = InnoDB,
     PARTITION `p2` VALUES LESS THAN (2015) ENGINE = InnoDB)
    
    SHOW CREATE TABLE t2\G
    *************************** 1. row ***************************
           TABLE: t2
    CREATE TABLE: CREATE TABLE `t2` (
      `dt` datetime NOT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_swedish_ci
    ALTER TABLE t1 CONVERT TABLE t2 TO PARTITION p3 VALUES LESS THAN (2016);
    
    SELECT * FROM t1;
    +---------------------+
    | dt                  |
    +---------------------+
    | 2013-11-11 00:00:00 |
    | 2014-11-11 00:00:00 |
    | 2015-11-11 00:00:00 |
    +---------------------+
    3 rows in set (0.001 sec)
    
    SELECT * FROM t2;
    ERROR 1146 (42S02): Table 'test.t2' doesn't exist
    
    SHOW CREATE TABLE t1\G
    *************************** 1. row ***************************
           TABLE: t1
    CREATE TABLE: CREATE TABLE `t1` (
      `dt` datetime NOT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_swedish_ci
     PARTITION BY RANGE (year(`dt`))
    (PARTITION `p0` VALUES LESS THAN (2013) ENGINE = InnoDB,
     PARTITION `p1` VALUES LESS THAN (2014) ENGINE = InnoDB,
     PARTITION `p2` VALUES LESS THAN (2015) ENGINE = InnoDB,
     PARTITION `p3` VALUES LESS THAN (2016) ENGINE = InnoDB)
    CONVERT TABLE normal_table TO partition_definition [{WITH | WITHOUT} VALIDATION]
    CREATE OR REPLACE TABLE t1 (
       dt DATETIME NOT NULL
     )
       ENGINE = InnoDB
       PARTITION BY RANGE (YEAR(dt))
       (
       PARTITION p0 VALUES LESS THAN (2013),
       PARTITION p1 VALUES LESS THAN (2014),
       PARTITION p2 VALUES LESS THAN (2015),
       PARTITION p3 VALUES LESS THAN (2016)
     );
    
    INSERT INTO t1 VALUES ('2013-11-11'),('2014-11-11'),('2015-11-11');
    
    SELECT * FROM t1;
    +---------------------+
    | dt                  |
    +---------------------+
    | 2013-11-11 00:00:00 |
    | 2014-11-11 00:00:00 |
    | 2015-11-11 00:00:00 |
    +---------------------+
    
    CREATE OR REPLACE TABLE t2 LIKE t1;
    
    ALTER TABLE t2 REMOVE PARTITIONING;
    
    ALTER TABLE t1 EXCHANGE PARTITION p3 WITH TABLE t2;
    
    ALTER TABLE t1 DROP PARTITION p3;
    
    SELECT * FROM t1;
    +---------------------+
    | dt                  |
    +---------------------+
    | 2013-11-11 00:00:00 |
    | 2014-11-11 00:00:00 |
    +---------------------+
    
    SELECT * FROM t2;
    +---------------------+
    | dt                  |
    +---------------------+
    | 2015-11-11 00:00:00 |
    +---------------------+
    
    SHOW CREATE TABLE t1\G
    *************************** 1. row ***************************
           TABLE: t1
    CREATE TABLE: CREATE TABLE `t1` (
      `dt` datetime NOT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_swedish_ci
     PARTITION BY RANGE (year(`dt`))
    (PARTITION `p0` VALUES LESS THAN (2013) ENGINE = InnoDB,
     PARTITION `p1` VALUES LESS THAN (2014) ENGINE = InnoDB,
     PARTITION `p2` VALUES LESS THAN (2015) ENGINE = InnoDB)
    
    SHOW CREATE TABLE t2\G
    *************************** 1. row ***************************
           TABLE: t2
    CREATE TABLE: CREATE TABLE `t2` (
      `dt` datetime NOT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_swedish_ci
    ALTER TABLE t1 ADD PARTITION (PARTITION p3 VALUES LESS THAN (2016));
    
    ALTER TABLE t1 EXCHANGE PARTITION p3 WITH TABLE t2;
    
    DROP TABLE t2;
    
    SELECT * FROM t1;
    +---------------------+
    | dt                  |
    +---------------------+
    | 2013-11-11 00:00:00 |
    | 2014-11-11 00:00:00 |
    | 2015-11-11 00:00:00 |
    +---------------------+
    
    SHOW CREATE TABLE t1\G
    *************************** 1. row ***************************
           TABLE: t1
    CREATE TABLE: CREATE TABLE `t1` (
      `dt` datetime NOT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_swedish_ci
     PARTITION BY RANGE (year(`dt`))
    (PARTITION `p0` VALUES LESS THAN (2013) ENGINE = InnoDB,
     PARTITION `p1` VALUES LESS THAN (2014) ENGINE = InnoDB,
     PARTITION `p2` VALUES LESS THAN (2015) ENGINE = InnoDB,
     PARTITION `p3` VALUES LESS THAN (2016) ENGINE = InnoDB)
    DROP PARTITION [IF EXISTS] partition_names
    CREATE OR REPLACE TABLE t1 (
      dt DATETIME NOT NULL
    )
      ENGINE = InnoDB
      PARTITION BY RANGE (YEAR(dt))
      (
      PARTITION p0 VALUES LESS THAN (2013),
      PARTITION p1 VALUES LESS THAN (2014),
      PARTITION p2 VALUES LESS THAN (2015),
      PARTITION p3 VALUES LESS THAN (2016)
    );
    
    INSERT INTO t1 VALUES ('2012-11-15');
    SELECT * FROM t1;
    +---------------------+
    | dt                  |
    +---------------------+
    | 2012-11-15 00:00:00 |
    +---------------------+
    
    ALTER TABLE t1 DROP PARTITION p0;
    
    SELECT * FROM t1;
    Empty set (0.002 sec)
    EXCHANGE PARTITION partition_name WITH TABLE tbl_name [{WITH | WITHOUT} VALIDATION]
    EXCHANGE PARTITION partition_name WITH TABLE tbl_name
    CREATE OR REPLACE TABLE t1 (
      dt DATETIME NOT NULL
    )
      ENGINE = InnoDB
      PARTITION BY RANGE (YEAR(dt))
      (
      PARTITION p0 VALUES LESS THAN (2013),
      PARTITION p1 VALUES LESS THAN (2014)
    );
    
    CREATE OR REPLACE TABLE t2 (
      dt DATETIME NOT NULL
    ) ENGINE = InnoDB;
    
    INSERT INTO t1 VALUES ('2012-01-01'),('2013-01-01');
    
    INSERT INTO t2 VALUES ('2013-02-02');
    
    SELECT * FROM t1;
    +---------------------+
    | dt                  |
    +---------------------+
    | 2012-01-01 00:00:00 |
    | 2013-01-01 00:00:00 |
    +---------------------+
    
    SELECT * FROM t2;
    +---------------------+
    | dt                  |
    +---------------------+
    | 2013-02-02 00:00:00 |
    +---------------------+
    
    ALTER TABLE t1 EXCHANGE PARTITION p1 WITH TABLE t2;
    
    SELECT * FROM t1;
    +---------------------+
    | dt                  |
    +---------------------+
    | 2012-01-01 00:00:00 |
    | 2013-02-02 00:00:00 |
    +---------------------+
    
    SELECT * FROM t2;
    +---------------------+
    | dt                  |
    +---------------------+
    | 2013-01-01 00:00:00 |
    +---------------------+
    CREATE OR REPLACE TABLE t1 (
      dt DATETIME NOT NULL
    )
      ENGINE = InnoDB
      PARTITION BY RANGE (YEAR(dt))
      (
      PARTITION p0 VALUES LESS THAN (2013),
      PARTITION p1 VALUES LESS THAN (2014)
    );
    
    CREATE OR REPLACE TABLE t2 (
      dt DATETIME NOT NULL
    ) ENGINE = InnoDB;
    
    INSERT INTO t1 VALUES ('2012-02-02'),('2013-03-03');
    
    INSERT INTO t2 VALUES ('2015-05-05');
    
    ALTER TABLE t1 EXCHANGE PARTITION p1 WITH TABLE t2;
    ERROR 1526 (HY000): Table has no partition for value 0
    ALTER TABLE t1 EXCHANGE PARTITION p1 WITH TABLE t2 WITHOUT VALIDATION;
    Query OK, 0 rows affected (0.048 sec)
    REMOVE PARTITIONING
    ALTER TABLE t1 REMOVE PARTITIONING;
    REORGANIZE PARTITION [partition_names INTO (partition_definitions)]
    CREATE OR REPLACE TABLE t1 (
      dt DATETIME NOT NULL
    )
      ENGINE = InnoDB
      PARTITION BY RANGE (YEAR(dt))
      (
      PARTITION p0 VALUES LESS THAN (2013),
      PARTITION p1 VALUES LESS THAN (2014),
      PARTITION p2 VALUES LESS THAN (2015),
      PARTITION p3 VALUES LESS THAN (2016)
    );
    
    ALTER TABLE t1 REORGANIZE PARTITION p0 INTO (
        PARTITION p0a VALUES LESS THAN (2012),
        PARTITION p0b VALUES LESS THAN (2013)
    );
    CREATE OR REPLACE TABLE t1 (
      dt DATETIME NOT NULL
    )
      ENGINE = InnoDB
      PARTITION BY RANGE (YEAR(dt))
      (
      PARTITION p0 VALUES LESS THAN (2013),
      PARTITION p1 VALUES LESS THAN (2014),
      PARTITION p2 VALUES LESS THAN (2015),
      PARTITION p3 VALUES LESS THAN (2016),
      PARTITION p4 VALUES LESS THAN MAXVALUE
    );
    
    ALTER TABLE t1 REORGANIZE PARTITION p4 INTO (
        PARTITION p4 VALUES LESS THAN (2017),
        PARTITION p5 VALUES LESS THAN MAXVALUE
    );
    CREATE OR REPLACE TABLE t1 (
      dt DATETIME NOT NULL
    )
      ENGINE = InnoDB
      PARTITION BY RANGE (YEAR(dt))
      (
      PARTITION p0 VALUES LESS THAN (2013),
      PARTITION p1 VALUES LESS THAN (2014),
      PARTITION p2 VALUES LESS THAN (2015),
      PARTITION p3 VALUES LESS THAN (2016)
    );
    
    ALTER TABLE t1 REORGANIZE PARTITION p2,p3 INTO (
        PARTITION p2 VALUES LESS THAN (2016)
    );
    CREATE OR REPLACE TABLE t1 (
      dt DATETIME NOT NULL
    )
      ENGINE = InnoDB
      PARTITION BY RANGE (YEAR(dt))
      (
      PARTITION p0 VALUES LESS THAN (2013),
      PARTITION p1 VALUES LESS THAN (2014),
      PARTITION p2 VALUES LESS THAN (2015),
      PARTITION p3 VALUES LESS THAN (2016)
    );
    
    ALTER TABLE t1 REORGANIZE PARTITION p3 INTO (
      PARTITION p3 VALUES LESS THAN (2017)
    );
    CREATE OR REPLACE TABLE t1 (
      dt DATETIME NOT NULL
    )
      ENGINE = InnoDB
      PARTITION BY RANGE (YEAR(dt))
      (
      PARTITION p0 VALUES LESS THAN (2013),
      PARTITION p1 VALUES LESS THAN (2014),
      PARTITION p2 VALUES LESS THAN (2015),
      PARTITION p3 VALUES LESS THAN (2016)
    );
    
    ALTER TABLE t1 REORGANIZE PARTITION p3 INTO (
      PARTITION p3_new VALUES LESS THAN (2016)
    );
    TRUNCATE PARTITION partition_names
    CREATE OR REPLACE TABLE t1 (
      dt DATETIME NOT NULL
    )
      ENGINE = InnoDB
      PARTITION BY RANGE (YEAR(dt))
      (
      PARTITION p0 VALUES LESS THAN (2013),
      PARTITION p1 VALUES LESS THAN (2014),
      PARTITION p2 VALUES LESS THAN (2015),
      PARTITION p3 VALUES LESS THAN (2016)
    );
    
    INSERT INTO t1 VALUES ('2012-11-01'),('2013-11-02'),('2014-11-03'),('2015-11-04');
    
    SELECT * FROM t1;
    +---------------------+
    | dt                  |
    +---------------------+
    | 2012-11-01 00:00:00 |
    | 2013-11-02 00:00:00 |
    | 2014-11-03 00:00:00 |
    | 2015-11-04 00:00:00 |
    +---------------------+
    
    ALTER TABLE t1 TRUNCATE PARTITION p0,p2;
    
    SELECT * FROM t1;
    +---------------------+
    | dt                  |
    +---------------------+
    | 2013-11-02 00:00:00 |
    | 2015-11-04 00:00:00 |
    +---------------------+
    ALTER TABLE t1 ANALYZE PARTITION p0,p1,p3;
    +---------+---------+----------+----------+
    | Table   | Op      | Msg_type | Msg_text |
    +---------+---------+----------+----------+
    | test.t1 | analyze | status   | OK       |
    +---------+---------+----------+----------+
    CHECK PARTITION {ALL | PARTITION [,partition2 ...]}
    ALTER TABLE t1 CHECK PARTITION p1,p3;
    +---------+-------+----------+----------+
    | Table   | Op    | Msg_type | Msg_text |
    +---------+-------+----------+----------+
    | test.t1 | check | status   | OK       |
    +---------+-------+----------+----------+
    REPAIR PARTITION {ALL | partition [,partition2 ...]} [QUICK] [EXTENDED]
    ALTER TABLE t1 REPAIR PARTITION p0,p3;
    +---------+--------+----------+----------+
    | Table   | Op     | Msg_type | Msg_text |
    +---------+--------+----------+----------+
    | test.t1 | repair | status   | OK       |
    +---------+--------+----------+----------+
    OPTIMIZE PARTITION {ALL | PARTITION [,partition2 ...]}
    ALTER TABLE t1 OPTIMIZE PARTITION p0,p3;
    +---------+----------+----------+----------+
    | Table   | Op       | Msg_type | Msg_text |
    +---------+----------+----------+----------+
    | test.t1 | optimize | status   | OK       |
    +---------+----------+----------+----------+

    mariadb-backup Options

    A comprehensive reference for all command-line options available in mariadb-backup, covering backup, prepare, and restore operations.

    mariadb-backup was previously called mariabackup.

    List of mariadb-backup Options

    --apply-log

    Prepares an existing backup to restore to the MariaDB Server. This is only valid in innobackupex mode, which can be enabled with the --innobackupex option.

    Files that mariadb-backup generates during operations in the target directory are not ready for use on the Server. Before you can restore the data to MariaDB, you first need to prepare the backup.

    In the case of full backups, the files are not point-in-time consistent, since they were taken at different times. If you try to restore the database without first preparing the data, InnoDB rejects the new data as corrupt. Running mariadb-backup with the --prepare command readies the data so you can restore it to MariaDB Server. When working with incremental backups, you need to use the --prepare command and the --incremental-dir option to update the base backup with the deltas from an incremental backup.

    Once the backup is ready, you can use the --copy-back or the --move-back commands to restore the backup to the server.

    --apply-log-only

    If this option is used when preparing a backup, then only the redo log apply stage is performed, and the other stages of crash recovery are skipped. This option is used with incremental backups.

    Note: This option is not needed or supported anymore.

    --backup

    Backs up your databases.

    Using this command option, mariadb-backup performs a backup operation on your database or databases. The backups are written to the target directory, as set by the --target-dir option.

    mariadb-backup can perform full and incremental backups. A full backup creates a snapshot of the database in the target directory. An incremental backup checks the database against a previously taken full backup (defined by the --incremental-basedir option) and creates delta files for the changes.

    In order to restore from a backup, you first need to run mariadb-backup with the --prepare option, to make a full backup point-in-time consistent or to apply incremental backup deltas to the base backup. Then you can run mariadb-backup again with either the --copy-back or the --move-back commands to restore the database.
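The full backup, prepare, and restore cycle can be sketched as follows (the directory path and credentials are illustrative):

```shell
# 1. Take a full backup into an empty target directory
mariadb-backup --backup \
  --target-dir=/var/mariadb/backup/ \
  --user=backup_user --password=secret

# 2. Prepare the backup so the files are point-in-time consistent
mariadb-backup --prepare \
  --target-dir=/var/mariadb/backup/

# 3. With the server stopped and the data directory empty, restore
mariadb-backup --copy-back \
  --target-dir=/var/mariadb/backup/
```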

    For more information, see Full Backup and Restore with mariadb-backup and Incremental Backup and Restore with mariadb-backup.

    --binlog-info

    Defines how mariadb-backup retrieves the binary log coordinates from the server.

    The --binlog-info option supports the ON, OFF, AUTO, and LOCKLESS retrieval methods. When no retrieval method is provided, it defaults to AUTO.

    Using this option, you can control how mariadb-backup retrieves the server's binary log coordinates corresponding to the backup.

    When enabled, whether using ON or AUTO, mariadb-backup retrieves information from the binlog during the backup process. When disabled with OFF, mariadb-backup runs without attempting to retrieve binary log information. You may find this useful when you need to copy data without metadata like the binlog or replication coordinates.

    Currently, the LOCKLESS option depends on features unsupported by MariaDB Server. See the description of the xtrabackup_binlog_pos_innodb file for more information. If you attempt to run mariadb-backup with this option, the utility exits with an error.

    --close-files

    Defines whether you want to close file handles.

    Using this option, you can tell mariadb-backup that you want to close file handles. Without this option, mariadb-backup keeps files open in order to manage DDL operations. When working with particularly large tablespaces, closing the file can make the backup more manageable. However, it can also lead to inconsistent backups. Use at your own risk.

    --compress

    This option was deprecated as it relies on the no longer maintained quicklz library, and it will be removed in a future release (versions supporting this feature will not be affected). It is recommended to instead back up to a stream (stdout) and use a third-party compression tool to compress the stream, as described in Using Encryption and Compression Tools With mariadb-backup.

    Defines the compression algorithm for backup files.

    The --compress option only supports the now deprecated quicklz algorithm.


    If a backup is compressed using this option, then mariadb-backup will record that detail in the xtrabackup_info file.

    --compress-chunk-size

    Deprecated; for details, see the --compress option.

    Defines the working buffer size for compression threads.

    mariadb-backup can perform compression operations on the backup files before writing them to disk. It can also use multiple threads for parallel data compression during this process. Using this option, you can set the chunk size each thread uses during compression. It defaults to 64K.

    To further configure backup compression, see the --compress and --compress-threads options.

    --compress-threads

    Deprecated; for details, see the --compress option.

    Defines the number of threads to use in compression.

    mariadb-backup can perform compression operations on the backup files before writing them to disk. Using this option, you can define the number of threads you want to use for this operation. You may find this useful in speeding up the compression of particularly large databases. It defaults to single-threaded.

    To further configure backup compression, see the --compress and --compress-chunk-size options.

    --copy-back

    Restores the backup to the data directory.

    Using this command, mariadb-backup copies the backup from the target directory to the data directory, as defined by the --datadir option. You must stop the MariaDB Server before running this command. The data directory must be empty; if you want to restore into a non-empty data directory, use the --force-non-empty-directories option.

    Bear in mind, before you can restore a backup, you first need to run mariadb-backup with the --prepare option. In the case of full backups, this makes the files point-in-time consistent. With incremental backups, this applies the deltas to the base backup. Once the backup is prepared, you can run --copy-back to apply it to MariaDB Server.

    Running the --copy-back command copies the backup files to the data directory, leaving the backup in place. Use this command if you want to keep the backup for later. If you don't, use the --move-back option instead.

    --core-file

    Defines whether to write a core file.

    Using this option, you can configure mariadb-backup to dump its core to file in the event that it encounters fatal signals. You may find this useful for review and debugging purposes.

    --databases

    Defines the databases and tables you want to back up.

    Using this option, you can define the specific database or databases you want to back up. In cases where you have a particularly large database or otherwise only want to back up a portion of it, you can optionally also define the tables on the database.

    In cases where you want to back up most databases on a server or tables on a database, but not all, you can set the specific databases or tables you don't want to back up using the --databases-exclude option.

    If a backup is a partial backup, then mariadb-backup will record that detail in the xtrabackup_info file.

    In innobackupex mode, which can be enabled with the --innobackupex option, the --databases option can be used as described above, or it can be used to refer to a file, just as the --databases-file option can in the normal mode.

    --databases-exclude

    Defines the databases you don't want to back up.

    Using this option, you can define the specific database or databases you want to exclude from the backup process. You may find it useful when you want to back up most databases on the server or tables on a database, but would like to exclude a few from the process.

    To include databases in the backup, see the --databases option.

    If a backup is a partial backup, then mariadb-backup records that detail in the xtrabackup_info file.

    --databases-file

    Defines the path to a file listing databases and/or tables you want to back up.

    Format the databases file to list one element per line, with the following syntax:
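Each line takes the form of a database name, optionally qualified by a table name:

```
databasename[.tablename]
```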

    In cases where you need to back up a number of databases or specific tables in a database, you may find the syntax for the --databases and --databases-exclude options a little cumbersome. Using this option you can set the path to a file listing the databases or databases and tables you want to back up.

    For instance, listing the databases and tables for a backup in a file called main-backup:
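A hypothetical main-backup file might contain (the database and table names are illustrative):

```
main_db
main_db.requests
second_db
```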

    If a backup is a partial backup, mariadb-backup records that detail in the xtrabackup_info file.

    -h, --datadir

    Defines the path to the database root.

    Using this option, you can define the path to the source directory. This is the directory that mariadb-backup reads for the data it backs up. It should be the same as the MariaDB Server datadir system variable.

    --debug-sleep-before-unlock

    This is a debug-only option used by the Xtrabackup test suite.

    --decompress

    Deprecated, for details see the --compress option.

    This option requires that you have the qpress utility installed on your system.

    Defines whether you want to decompress previously compressed backup files.

    When you run mariadb-backup with the --compress option, it compresses the subsequent backup files, using the QuickLZ algorithm. Using this option, mariadb-backup decompresses the compressed files from a previous backup.

    For instance, run a backup with compression:
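A sketch of the command (the target directory is illustrative):

```shell
mariadb-backup --backup --compress --target-dir=/var/mariadb/backup/
```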

    Then, decompress the backup:
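A matching decompress invocation, using the same illustrative directory:

```shell
mariadb-backup --decompress --target-dir=/var/mariadb/backup/
```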

    You can enable the decompression of multiple files at a time using the --parallel option. By default, mariadb-backup does not remove the compressed files from the target directory. To delete these files, use the --remove-original option.

    --debug-sync

    Defines the debug sync point. This option is only used by the mariadb-backup test suite.

    --defaults-extra-file

    Defines the path to an extra default option file.

    Using this option, you can define an extra default option file for mariadb-backup. Unlike --defaults-file, this file is read after the default option files, allowing you to overwrite existing defaults rather than replace them entirely.

    --defaults-file

    Defines the path to the default option file.

    Using this option, you can define a default option file for mariadb-backup. Unlike the --defaults-extra-file option, when this option is provided, it completely replaces all default option files.

    --defaults-group

    Defines the option group to read in the option file.

    In situations where you find yourself using certain mariadb-backup options consistently every time you call it, you can set the options in an option file. The --defaults-group option defines what option group mariadb-backup reads for its options.

    Options you define from the command-line can be set in the configuration file using minor formatting changes. For instance, if you find yourself performing compression operations frequently, you might set the --compress-threads and --compress-chunk-size options in this way:
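A sketch of such an option file, assuming the mariadb-backup group is the one read (the group name depends on --defaults-group; the values match the 12 threads and 64K chunks discussed here):

```ini
[mariadb-backup]
compress-threads=12
compress-chunk-size=64K
```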

    Now whenever you run a backup with the --compress option, it always performs the compression using 12 threads and 64K chunks.

    See the mariadb-backup overview for the lists of server and client option groups read by mariadb-backup by default.

    --encrypted-backup

    When this option is used with --backup, if mariadb-backup encounters a page that has a non-zero key_version value, then mariadb-backup assumes that the page is encrypted.

    Use --skip-encrypted-backup instead to allow mariadb-backup to copy unencrypted tables that were originally created before MySQL 5.1.48.

    --export

    If this option is provided during the --prepare stage, then it tells mariadb-backup to create .cfg files for each InnoDB file-per-table tablespace. These .cfg files are used to import transportable tablespaces in the process of restoring partial backups and restoring individual tables and partitions.

    The --export option could require rolling back incomplete transactions that had modified the table. This will likely create a "new branch of history" that does not correspond to the server that had been backed up, which makes it impossible to apply another incremental backup on top of such additional changes. The option should only be applied when doing a --prepare of the last incremental backup.

    Older versions of mariadb-backup did not support the --export option, meaning that in those versions mariadb-backup could not create .cfg files for InnoDB file-per-table tablespaces during the --prepare stage. You can still import file-per-table tablespaces without the .cfg files in many cases, so it may still be possible in those versions to restore partial backups or to restore individual tables and partitions with just the .ibd files. If you have a full backup and you need to create .cfg files for InnoDB file-per-table tablespaces, then you can do so by preparing the backup as usual without the --export option, then restoring the backup and starting the server. At that point, you can use the server's built-in features to copy the transportable tablespaces.

    --extra-lsndir

    Saves an extra copy of the xtrabackup_checkpoints and xtrabackup_info files into the given directory.

    When using the --backup option, mariadb-backup produces a number of backup files in the target directory. Using this option, you can have mariadb-backup produce additional copies of the xtrabackup_checkpoints and xtrabackup_info files in the given directory.

    This is especially useful when using --stream for streaming output, e.g. for compression and/or encryption using external tools in combination with incremental backups: the xtrabackup_checkpoints file needed to determine the LSN from which to continue the incremental backup remains accessible without first uncompressing or decrypting the backup file. Pass the --extra-lsndir of the previous backup as --incremental-basedir.
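    For example, a streamed, compressed full backup followed by an incremental based on the saved checkpoints might look like this (paths and file names hypothetical):

    ```shell
    # Full backup streamed through gzip; keep readable copies of
    # xtrabackup_checkpoints and xtrabackup_info in /backups/full-lsn
    mariadb-backup --backup --stream=xbstream \
        --extra-lsndir=/backups/full-lsn --target-dir=/tmp \
        | gzip > /backups/full.xb.gz

    # Incremental backup continues from the LSN recorded in that
    # directory, without decompressing /backups/full.xb.gz first
    mariadb-backup --backup --stream=xbstream \
        --incremental-basedir=/backups/full-lsn --target-dir=/tmp \
        | gzip > /backups/inc1.xb.gz
    ```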

    --force-non-empty-directories

    Allows --copy-back or --move-back options to use non-empty target directories.

    When using mariadb-backup with the --copy-back or --move-back options, they normally require an empty target directory to avoid conflicts. Using this option with either command allows mariadb-backup to use a non-empty directory.

    Bear in mind that this option does not enable overwrites. When copying or moving files into the target directory, if mariadb-backup finds that the target file already exists, it fails with an error.

    --ftwrl-wait-query-type

    Defines the type of query allowed to complete before mariadb-backup issues the global lock.

    The --ftwrl-wait-query-type option supports the following query types: ALL, SELECT, and UPDATE. The default value is ALL.

    When mariadb-backup runs, it issues a global lock to prevent data from changing during the backup process. When it encounters a statement that is still executing, it waits until the statement finishes before issuing the global lock. Using this option, you can modify this default behavior so that it waits only for certain query types, such as SELECT and UPDATE statements.

    --ftwrl-wait-threshold

    Defines the minimum threshold for identifying long-running queries for FTWRL.

    When mariadb-backup runs, it issues a global lock to prevent data from changing during the backup process and ensure a consistent record. If it encounters statements still in the process of executing, it waits until they complete before setting the lock. Using this option, you can set the threshold at which mariadb-backup engages FTWRL. When --ftwrl-wait-timeout is not 0 and a statement has run for at least the number of seconds given by this option, mariadb-backup waits until the statement completes or until --ftwrl-wait-timeout expires before setting the global lock and starting the backup.

    --ftwrl-wait-timeout

    Defines the timeout to wait for queries before trying to acquire the global lock. Depending on the MariaDB version, the global lock refers to BACKUP STAGE BLOCK_COMMIT (newer releases) or to FLUSH TABLES WITH READ LOCK (FTWRL) (older releases).

    When mariadb-backup runs, it acquires a global lock to prevent data from changing during the backup process and ensure a consistent record. If it encounters statements still in the process of executing, it can be configured to wait until the statements complete before trying to acquire the global lock.

    If the --ftwrl-wait-timeout is set to 0, mariadb-backup tries to acquire the global lock immediately without waiting. This is the default value.

    If the --ftwrl-wait-timeout is set to a non-zero value, then mariadb-backup waits for the configured number of seconds until trying to acquire the global lock.

    mariadb-backup exits if it can't acquire the global lock after waiting for the configured number of seconds.

    The --ftwrl-wait-timeout option specifies the maximum time that mariadb-backup will wait to obtain the global lock required to begin a consistent backup.

    Depending on the server version, this lock is acquired with BACKUP STAGE BLOCK_COMMIT (newer releases) or with FLUSH TABLES WITH READ LOCK (FTWRL) (older releases).

    If the lock cannot be obtained within the configured timeout, the backup process fails.

    This option helps avoid failures caused by long-running MariaDB queries that block backup locks.

    Example Errors

    When the timeout is not set appropriately, backups may fail with lock wait timeout errors recorded in the mariadb-backup output.

    Originally, mariadb-backup could wait indefinitely for the lock. Starting with the fix for MDEV-20230:

    • The --ftwrl-wait-timeout option also ensures mariadb-backup exits gracefully if the lock cannot be obtained within the timeout period.

    • This prevents backups from hanging when lock acquisition is blocked by long-running queries.

    When to Use

    Use --ftwrl-wait-timeout when:

    • Your workload includes long-running queries (for example, ALTER TABLE or large INSERT batches).

    • Backups sometimes fail with lock wait timeout errors.

    • You want mariadb-backup to either wait longer for the lock or exit cleanly if it cannot be obtained.
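    A sketch combining the three FTWRL options (target directory hypothetical):

    ```shell
    # Wait up to 60 seconds for statements already running for 30+
    # seconds to finish before taking the global lock; fail if the
    # lock still cannot be acquired within the timeout
    mariadb-backup --backup \
        --target-dir=/var/mariadb/backup \
        --ftwrl-wait-timeout=60 \
        --ftwrl-wait-threshold=30 \
        --ftwrl-wait-query-type=update
    ```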

    --galera-info

    Defines whether you want to back up information about a Galera Cluster node's state.

    When this option is used, mariadb-backup creates an additional file called xtrabackup_galera_info, which records information about a Galera Cluster node's state. It records the values of the wsrep_local_state_uuid and wsrep_last_committed status variables.

    You should only use this option when backing up a Galera Cluster node. If the server is not a Galera Cluster node, then this option has no effect.

    This option, when enabled and used with GTID replication, will rotate the binary logs at backup time.

    --history

    Defines whether you want to track backup history in the mysql.mariadb_backup_history table.

    When using this option, mariadb-backup records its operation in a table on the MariaDB Server. Passing a name to this option allows you to group backups under arbitrary terms for later processing and analysis.

    Information is written to mysql.mariadb_backup_history.

    mariadb-backup also records this in the xtrabackup_info file.

    In MariaDB 10.10 and earlier, backup history was instead tracked in the PERCONA_SCHEMA.xtrabackup_history table.

    -H, --host

    Defines the hostname for the MariaDB Server you want to back up.

    Using this option, you can define the hostname or IP address to use when connecting to a local MariaDB Server over TCP/IP. By default, mariadb-backup attempts to connect to localhost.

    Warning: No Remote Backups. This option does not allow you to back up a remote server. mariadb-backup must be run on the same server where the database files reside. The --host option is used only to establish the client connection for managing locks and retrieving metadata. The actual data files are always read from the local filesystem. Attempting to use this option to back up a remote host results in a backup of the local machine's data, associated with the remote machine's binary log coordinates.
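    A minimal sketch of a backup run with explicit connection options (user, password, and directory are hypothetical), executed on the database host itself:

    ```shell
    # --host/--port only control the client connection used for locks
    # and metadata; data files are still read from the local filesystem
    mariadb-backup --backup \
        --target-dir=/var/mariadb/backup \
        --host=localhost --port=3306 \
        --user=backup_user --password=backup_pass
    ```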

    --include

    This option is a regular expression to be matched against table names in databasename.tablename format. It is equivalent to the --tables option. This is only valid in innobackupex mode, which can be enabled with the --innobackupex option.

    --incremental

    Defines whether you want to take an incremental backup, based on another backup. This is only valid in innobackupex mode, which can be enabled with the --innobackupex option.

    Using this option with the --backup option makes the operation incremental rather than a complete overwrite. When this option is specified, either the --incremental-lsn or --incremental-basedir options can also be given. If neither option is given, --incremental-basedir is used by default, set to the first timestamped backup directory in the backup base directory.

    If a backup is an incremental backup, then mariadb-backup records that detail in the xtrabackup_info file.

    --incremental-basedir

    Defines the directory of the base backup on which an incremental backup is based.

    Using this option with the --backup option makes the operation incremental rather than a complete overwrite. mariadb-backup only copies pages from .ibd files if they are newer than the backup in the specified directory.

    If a backup is an incremental backup, then mariadb-backup records that detail in the xtrabackup_info file.
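    For example, assuming a full backup already exists in /var/mariadb/full (paths hypothetical):

    ```shell
    # Take the full backup first
    mariadb-backup --backup --target-dir=/var/mariadb/full

    # Then copy only the pages that changed since that backup
    mariadb-backup --backup \
        --target-dir=/var/mariadb/inc1 \
        --incremental-basedir=/var/mariadb/full
    ```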

    --incremental-dir

    Defines the directory of an incremental backup to apply during --prepare.

    Using this option with the --prepare command option makes the operation incremental rather than a complete overwrite. mariadb-backup applies the .delta files and log files from this directory to the target directory.

    If a backup is an incremental backup, then mariadb-backup records that detail in the xtrabackup_info file.
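    A sketch of applying an incremental backup to its base during --prepare (paths hypothetical):

    ```shell
    # Apply the deltas from /var/mariadb/inc1 onto the base backup
    mariadb-backup --prepare \
        --target-dir=/var/mariadb/full \
        --incremental-dir=/var/mariadb/inc1
    ```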

    --incremental-force-scan

    Defines whether you want to force a full scan for incremental backups.

    When using mariadb-backup to perform an incremental backup, this option forces it to perform a full scan of the data pages being backed up, even when bitmap data about the changes is available. MariaDB does not support changed page bitmaps, so this option has no effect.

    --incremental-history-name

    Defines a logical name for the backup.

    mariadb-backup can store data about its operations on the MariaDB Server. Using this option, you can define the logical name it uses in identifying the backup.

    The table it uses by default is named mysql.mariadb_backup_history. Prior to MariaDB 10.11, the default table was PERCONA_SCHEMA.xtrabackup_history.

    mariadb-backup also records this in the xtrabackup_info file.

    --incremental-history-uuid

    Defines a UUID for the backup.

    mariadb-backup can store data about its operations on the MariaDB Server. Using this option, you can define the UUID it uses in identifying a previous backup to increment from. It checks --incremental-history-name, --incremental-basedir, and --incremental-lsn. If mariadb-backup fails to find a valid lsn, it generates an error.

    The history table it reads depends on the server version:

    • MariaDB 10.11 and later: mysql.mariadb_backup_history (InnoDB).

    • MariaDB 10.10 and earlier: PERCONA_SCHEMA.xtrabackup_history (CSV).

    mariadb-backup also records this in the xtrabackup_info file.

    --incremental-lsn

    Defines the log sequence number for incremental backups.

    Using this option, you can define the log sequence number (LSN) value for --backup operations. During backups, mariadb-backup only copies .ibd pages newer than the specified value.

    Incorrect LSN values can make the backup unusable, and there is no way to diagnose the resulting corruption.

    --innobackupex

    Deprecated option.

    Use this option to enable innobackupex mode, which is a compatibility mode.

    --innodb

    This option has no effect. Set only for MySQL option compatibility.

    --innodb-adaptive-hash-index

    Enables InnoDB Adaptive Hash Index.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option you can explicitly enable the InnoDB Adaptive Hash Index. This feature is enabled by default for mariadb-backup. If you want to disable it, use --skip-innodb-adaptive-hash-index.

    --innodb-autoextend-increment

    Defines the increment in megabytes for auto-extending the size of tablespace file.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option, you can set the increment in megabytes for automatically extending the size of tablespace data file in InnoDB.

    --innodb-buffer-pool-filename

    Using this option has no effect. It is available to provide compatibility with the MariaDB Server.

    --innodb-buffer-pool-size

    Defines the memory buffer size InnoDB uses to cache data and indexes of its tables.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option, you can configure the buffer pool for InnoDB operations.

    --innodb-checksum-algorithm

    innodb_checksum_algorithm has been removed.

    --innodb-data-file-path

    Defines the path to individual data files.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option you can define the path to InnoDB data files. Each path is appended to the --innodb-data-home-dir option.

    --innodb-data-home-dir

    Defines the home directory for InnoDB data files.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option you can define the path to the directory containing InnoDB data files. You can specify the files using the --innodb-data-file-path option.

    --innodb-doublewrite

    Enables doublewrites for InnoDB tables.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. When using this option, mariadb-backup improves fault tolerance on InnoDB tables with a doublewrite buffer. By default, this feature is enabled. Use this option to explicitly enable it. To disable doublewrites, use the --skip-innodb-doublewrite option.

    --innodb-encrypt-log

    Defines whether you want to encrypt InnoDB logs.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option, you can tell mariadb-backup that you want to encrypt logs from its InnoDB activity.

    --innodb-file-io-threads

    Defines the number of file I/O threads in InnoDB.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option, you can define the number of file I/O threads mariadb-backup uses on InnoDB tables.

    --innodb-file-per-table

    Defines whether you want to store each InnoDB table as an .ibd file.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option causes mariadb-backup to store each InnoDB table as an .ibd file in the target directory.

    --innodb-flush-method

    Defines the data flush method. This option is ignored in recent MariaDB releases.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option, you can define the data flush method mariadb-backup uses with InnoDB tables.

    --innodb-io-capacity

    Defines the number of I/O operations per second (IOPS) the utility can perform.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option, you can limit the I/O activity for InnoDB background tasks. It should be set to around the number of I/O operations per second that the system can handle, based on the drive or drives being used.

    --innodb-log-buffer-size

    The size of the buffer that will be used for reading log during mariadb-backup --prepare. Ignored when using --innodb-log-file-mmap.

    --innodb-log-checksums

    Defines whether to include checksums in the InnoDB logs.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option, you can explicitly set mariadb-backup to include checksums in the InnoDB logs. The feature is enabled by default. To disable it, use the --skip-innodb-log-checksums option.

    --innodb-log-checkpoint-now

    At the start of a backup, instruct the server to write out all modified pages to the data files, to minimize the size of the ib_logfile0 that needs to be copied.

    --innodb-log-file-mmap

    MariaDB starting with 10.11.10, 11.4.4: When this option is enabled, mariadb-backup will read the ib_logfile0 via a memory mapping, rather than by reading into a separately allocated buffer of --innodb-log-buffer-size.

    --innodb-log-files-in-group

    This option has no functionality in mariadb-backup. It exists for MariaDB Server compatibility.

    --innodb-log-group-home-dir

    Defines the path to InnoDB log files.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option, you can define the path to InnoDB log files.

    --innodb-max-dirty-pages-pct

    Defines the percentage of dirty pages allowed in the InnoDB buffer pool.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option, you can define the maximum percentage of dirty (that is, unwritten) pages that mariadb-backup allows in the InnoDB buffer pool.

    --innodb-open-files

    Defines the number of files kept open at a time.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option, you can set the maximum number of files InnoDB keeps open at a given time during backups.

    --innodb-page-size

    Defines the universal page size.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option, you can define the universal page size in bytes for mariadb-backup.

    --innodb-read-io-threads

    Defines the number of background read I/O threads in InnoDB.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option, you can set the number of I/O threads MariaDB uses when reading from InnoDB.

    --innodb-undo-directory

    Defines the directory for the undo tablespace files.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option, you can define the path to the directory where you want MariaDB to store the undo tablespace on InnoDB tables. The path can be absolute.

    --innodb-undo-tablespaces

    Defines the number of undo tablespaces to use.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option, you can define the number of undo tablespaces you want to use during the backup.

    --innodb-use-native-aio

    Defines whether you want to use native AI/O.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option, you can enable the use of the native asynchronous I/O subsystem. It is only available on Linux operating systems.

    --innodb-write-io-threads

    Defines the number of background write I/O threads in InnoDB.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option, you can set the number of background write I/O threads mariadb-backup uses.

    --kill-long-queries-timeout

    Defines the timeout for blocking queries.

    When mariadb-backup runs, it issues a FLUSH TABLES WITH READ LOCK statement. It then identifies blocking queries. Using this option you can set a timeout in seconds for these blocking queries. When the time runs out, mariadb-backup kills the queries.

    The default value is 0, which causes mariadb-backup to not attempt killing any queries.

    --kill-long-query-type

    Defines the query type the utility can kill to unblock the global lock.

    When mariadb-backup encounters a query that sets a global lock, it can kill the query in order to free up MariaDB Server for the backup. Using this option, you can choose the types of query it kills: SELECT, UPDATE, or both set with ALL. The default is ALL.
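    For example, to kill only blocking SELECT statements that are still running 20 seconds after the lock is requested (target directory hypothetical):

    ```shell
    mariadb-backup --backup \
        --target-dir=/var/mariadb/backup \
        --kill-long-queries-timeout=20 \
        --kill-long-query-type=select
    ```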

    --lock-ddl-per-table

    Prevents DDL on each table being backed up by acquiring a metadata lock (MDL) on it.

    Unless the --no-lock option is also specified, conflicting DDL queries are killed at the end of the backup. This is done to avoid a deadlock between FLUSH TABLES WITH READ LOCK, a user's DDL query (ALTER, RENAME), and the MDL lock on the table.

    --log

    This option has no functionality. It is set to ensure compatibility with MySQL.

    --log-bin

    Defines the base name for the log sequence.

    Using this option, you can set the base name for mariadb-backup to use in log sequences.

    --log-copy-interval

    Defines the copy interval between checks done by the log copying thread.

    Using this option, you can define the copy interval mariadb-backup uses between checks done by the log copying thread. The given value is in milliseconds.

    --log-innodb-page-corruption

    Continue the backup if corrupted InnoDB pages are found. The pages are logged in the innodb_corrupted_pages file, and the backup finishes with an error. --prepare will try to fix the corrupted pages. If innodb_corrupted_pages still exists in the base backup directory after --prepare, the backup still contains corrupted pages and cannot be considered consistent.

    --move-back

    Restores the backup to the data directory.

    Using this command, mariadb-backup moves the backup from the target directory to the data directory, as defined by the --datadir option. You must stop the MariaDB Server before running this command. The data directory must be empty. If you want to restore into a non-empty data directory, use the --force-non-empty-directories option.

    Bear in mind, before you can restore a backup, you first need to run mariadb-backup with the --prepare option. In the case of full backups, this makes the files point-in-time consistent. With incremental backups, this applies the deltas to the base backup. Once the backup is prepared, you can run --move-back to apply it to MariaDB Server.

    Running the --move-back command moves the backup files to the data directory. Use this command if you don't want to save the backup for later. If you do want to save the backup for later, use the --copy-back option.
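    A sketch of a restore with --move-back (the backup and data directory paths are assumptions; adjust ownership to your installation):

    ```shell
    # Stop the server first; the data directory must be empty
    mariadb-backup --move-back --target-dir=/var/mariadb/backup

    # Restore file ownership before starting the server again
    chown -R mysql:mysql /var/lib/mysql
    ```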

    --mysqld

    Used internally to prepare a backup.

    --no-backup-locks

    mariadb-backup locks the database by default when it runs. This option disables support for Percona Server's backup locks.

    When backing up Percona Server, mariadb-backup would use backup locks by default. To be specific, backup locks refers to the LOCK TABLES FOR BACKUP and LOCK BINLOG FOR BACKUP statements. This option can be used to disable support for Percona Server's backup locks. This option has no effect when the server does not support Percona's backup locks.

    Deprecated; has no effect in recent MariaDB releases, as MariaDB now always uses backup locks for better performance.

    --no-lock

    Disables table locks with the FLUSH TABLE WITH READ LOCK statement.

    Using this option causes mariadb-backup to disable table locks with the FLUSH TABLE WITH READ LOCK statement. Only use this option if:

    • You are not executing DML statements on non-InnoDB tables during the backup. This includes the mysql database system tables (which are MyISAM).

    • You are not executing any DDL statements during the backup.

    • You are not using the xtrabackup_binlog_info file, which is not consistent with the data when --no-lock is used. Use the xtrabackup_binlog_pos_innodb file instead.

    If you're considering --no-lock due to backups failing to acquire locks, this may be due to incoming replication events preventing the lock. Consider using the --safe-slave-backup option to momentarily stop the replica thread. This alternative may help the backup to succeed without resorting to --no-lock.

    The --no-lock option only provides a consistent backup if the user ensures that no DDL or non-transactional table updates occur during the backup. The --no-lock option is not supported by MariaDB plc.

    --no-timestamp

    This option prevents creation of a time-stamped subdirectory of the BACKUP-ROOT-DIR given on the command line. When it is specified, the backup is done in BACKUP-ROOT-DIR instead. This is only valid in innobackupex mode, which can be enabled with the --innobackupex option.

    --no-version-check

    Disables version check.

    Using this option, you can disable mariadb-backup version check.

    --open-files-limit

    Defines the maximum number of file descriptors.

    Using this option, you can define the maximum number of file descriptors mariadb-backup reserves with setrlimit().

    --parallel

    Defines the number of threads to use for parallel data file transfer.

    Using this option, you can set the number of threads mariadb-backup uses for parallel data file transfers. By default, it is set to 1.

    -p, --password

    Defines the password to use to connect to MariaDB Server.

    When you run mariadb-backup, it connects to MariaDB Server in order to access and back up the databases and tables. Using this option, you can set the password mariadb-backup uses to access the server. To set the user, use the --user option.

    --plugin-dir

    Defines the directory for server plugins.

    Using this option, you can define the path mariadb-backup reads for MariaDB Server plugins. It only uses it during the --prepare phase to load the encryption plugin. It defaults to the plugin_dir server system variable.

    --plugin-load

    The option has been removed.

    -P, --port

    Defines the server port to connect to.

    When you run mariadb-backup, it connects to MariaDB Server in order to access and back up your databases and tables. Using this option, you can set the port the utility uses to access the server over TCP/IP. To set the host, see the --host option. Use mysql --help for more details.

    --prepare

    Prepares an existing backup to restore to the MariaDB Server.

    Files that mariadb-backup generates during --backup operations in the target directory are not ready for use on the Server. Before you can restore the data to MariaDB, you first need to prepare the backup.

    In the case of full backups, the files are not point in time consistent, since they were taken at different times. If you try to restore the database without first preparing the data, InnoDB rejects the new data as corrupt. Running mariadb-backup with the --prepare command readies the data so you can restore it to MariaDB Server. When working with incremental backups, you need to use the --prepare command and the --incremental-dir option to update the base backup with the deltas from an incremental backup.

    Once the backup is ready, you can use the --copy-back or the --move-back options to restore the backup to the server.
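    The restore sequence for a full backup can be sketched as (path hypothetical):

    ```shell
    # 1. Make the raw backup files point-in-time consistent
    mariadb-backup --prepare --target-dir=/var/mariadb/backup

    # 2. Restore, keeping the backup files available for later reuse
    mariadb-backup --copy-back --target-dir=/var/mariadb/backup
    ```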

    --print-defaults

    Prints the utility argument list, then exits.

    Using this argument, mariadb-backup prints the argument list to stdout and then exits. You may find this useful in debugging to see how the options are set for the utility.

    --print-param

    Prints the MariaDB Server options needed for copy-back.

    Using this option, mariadb-backup prints to stdout the MariaDB Server options that the utility requires to run the --copy-back command option.

    --rollback-xa

    By default, mariadb-backup will not commit or rollback uncommitted XA transactions, and when the backup is restored, any uncommitted XA transactions must be manually committed using XA COMMIT or manually rolled back using XA ROLLBACK.

    In recent MariaDB releases, the --rollback-xa option is not present because the server has more robust ways of handling uncommitted XA transactions.

    This is an experimental option. Do not use it in older versions; the older implementation can cause corruption of InnoDB data.

    --rsync

    Defines whether to use rsync.

    During normal operation, mariadb-backup transfers local non-InnoDB files using a separate call to cp for each file. Using this option, you can optimize this process by performing this transfer with rsync, instead.

    This option is not compatible with the --stream option.

    Deprecated; has no effect in recent MariaDB releases, as rsync does not work on tables that are in use.

    --safe-slave-backup

    Stops replica SQL threads for backups.

    When running mariadb-backup on a server that uses replication, you may occasionally encounter locks that block backups. Using this option, mariadb-backup stops the replica SQL thread and waits until Slave_open_temp_tables in the SHOW STATUS output is zero. If there are no open temporary tables, the backup runs; otherwise, the SQL thread is started and stopped until there are no open temporary tables.

    The backup fails if Slave_open_temp_tables doesn't reach zero within the timeout period set by the --safe-slave-backup-timeout option.

    --safe-slave-backup-timeout

    Defines the timeout for replica backups.

    When running mariadb-backup on a server that uses replication, you may occasionally encounter locks that block backups. With the --safe-slave-backup option, it waits until Slave_open_temp_tables in the SHOW STATUS output reaches zero. Using this option, you set how long it waits, in seconds. It defaults to 300.
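    For example, to pause the replica SQL thread and wait up to 10 minutes for open temporary tables to close (target directory hypothetical):

    ```shell
    mariadb-backup --backup \
        --target-dir=/var/mariadb/backup \
        --safe-slave-backup \
        --safe-slave-backup-timeout=600
    ```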

    --secure-auth

    Refuses client connections to servers using the older protocol.

    Using this option, you can set it explicitly to refuse client connections to the server when using the older protocol, from before 4.1.1. This feature is enabled by default. Use the --skip-secure-auth option to disable it.

    --skip-innodb-adaptive-hash-index

    Disables InnoDB Adaptive Hash Index.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option you can explicitly disable the InnoDB Adaptive Hash Index. This feature is enabled by default for mariadb-backup. If you want to explicitly enable it, use --innodb-adaptive-hash-index.

    --skip-innodb-doublewrite

    Disables doublewrites for InnoDB tables.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. When doublewrites are enabled, InnoDB improves fault tolerance with a doublewrite buffer. By default this feature is turned on. Using this option you can disable it for mariadb-backup. To explicitly enable doublewrites, use the --innodb-doublewrite option.

    --skip-innodb-log-checksums

    Defines whether to exclude checksums in the InnoDB logs.

    mariadb-backup initializes its own embedded instance of InnoDB using the same configuration as defined in the configuration file. Using this option, you can set mariadb-backup to exclude checksums in the InnoDB logs. The feature is enabled by default. To explicitly enable it, use the --innodb-log-checksums option.

    --skip-secure-auth

    Accepts client connections to servers using the older protocol.

    Using this option, you can accept client connections to the server when using the older protocol, from before MySQL 4.1.1. By default, these connections are refused. Use the --secure-auth option to explicitly refuse them.

    --slave-info

    Prints the binary log position and the name of the primary server.

    If the server is a replica, then this option causes mariadb-backup to print the hostname of the replica's replication primary and the binary log file and position of the replica's SQL thread to stdout.

    This option also causes mariadb-backup to record this information as a CHANGE MASTER statement that can be used to set up a new server as a replica of the original server's primary after the backup has been restored. This information is written to the xtrabackup_slave_info file.

    mariadb-backup does not check if GTIDs are being used in replication. It takes a shortcut and assumes that if the gtid_slave_pos system variable is non-empty, it should write the CHANGE MASTER statement with the MASTER_USE_GTID option set to slave_pos. Otherwise, it writes the CHANGE MASTER statement with the MASTER_LOG_FILE and MASTER_LOG_POS options using the primary's binary log file and position.

    -S, --socket

    Defines the socket for connecting to a local database server.

    Using this option, you can define the UNIX domain socket to use when connecting to a local database server. The option accepts a string argument. For more information, see the mysql --help command. For example:

        mariadb-backup --backup \
              --socket=/var/mysql/mysql.sock

    --ssl

    Enables TLS. By using this option, you can explicitly configure mariadb-backup to encrypt its connection with TLS when communicating with the server. You may find this useful when performing backups in environments where security is extra important or when operating over an insecure network.

    TLS is also enabled even without setting this option when certain other TLS options are set. For example, see the descriptions of the following options:

    • --ssl-ca

    • --ssl-capath

    • --ssl-cert

    • --ssl-cipher

    --ssl-ca

    Defines a path to a PEM file that should contain one or more X509 certificates for trusted Certificate Authorities (CAs) to use for TLS. This option requires that you use the absolute path, not a relative path. For example:

        --ssl-ca=/etc/my.cnf.d/certificates/ca.pem

    This option is usually used with other TLS options. For example:

        mariadb-backup --backup \
           --ssl-cert=/etc/my.cnf.d/certificates/client-cert.pem \
           --ssl-key=/etc/my.cnf.d/certificates/client-key.pem \
           --ssl-ca=/etc/my.cnf.d/certificates/ca.pem

    See Secure Connections Overview: Certificate Authorities (CAs) for more information.

    This option implies the --ssl option.

    --ssl-capath

    Defines a path to a directory that contains one or more PEM files that should each contain one X509 certificate for a trusted Certificate Authority (CA) to use for TLS. This option requires that you use the absolute path, not a relative path. For example:

        --ssl-capath=/etc/my.cnf.d/certificates/ca/

    This option is usually used with other TLS options. For example:

        mariadb-backup --backup \
           --ssl-cert=/etc/my.cnf.d/certificates/client-cert.pem \
           --ssl-key=/etc/my.cnf.d/certificates/client-key.pem \
           --ssl-ca=/etc/my.cnf.d/certificates/ca.pem \
           --ssl-capath=/etc/my.cnf.d/certificates/ca/

    The directory specified by this option needs to be run through the openssl rehash command.

    See Secure Connections Overview: Certificate Authorities (CAs) for more information.

    This option implies the --ssl option.
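    The rehash step can be sketched like this; the /tmp/ca-demo directory and the throwaway certificate are hypothetical, used only to show what openssl rehash produces:

```shell
# Create a CA directory containing one throwaway self-signed certificate.
# All paths are illustrative; mariadb-backup plays no part in this step.
mkdir -p /tmp/ca-demo
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/ca-demo-key.pem \
  -out /tmp/ca-demo/ca.pem \
  -days 1 -subj "/CN=demo-ca" 2>/dev/null

# openssl rehash adds HASH.N symlinks so the TLS library can look up
# CA certificates in the --ssl-capath directory by subject-name hash.
openssl rehash /tmp/ca-demo

# The directory now holds ca.pem plus a symlink such as 1a2b3c4d.0.
ls /tmp/ca-demo
```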

    --ssl-cert

    Defines a path to the X509 certificate file to use for TLS. This option requires that you use the absolute path, not a relative path. For example:

        --ssl-cert=/etc/my.cnf.d/certificates/client-cert.pem

    This option is usually used with other TLS options. For example:

        mariadb-backup --backup \
           --ssl-cert=/etc/my.cnf.d/certificates/client-cert.pem \
           --ssl-key=/etc/my.cnf.d/certificates/client-key.pem \
           --ssl-ca=/etc/my.cnf.d/certificates/ca.pem

    This option implies the --ssl option.

    --ssl-cipher

    Defines the list of permitted ciphers or cipher suites to use for TLS. For example:

        --ssl-cipher=name

    This option is usually used with other TLS options. For example:

        mariadb-backup --backup \
           --ssl-cert=/etc/my.cnf.d/certificates/client-cert.pem \
           --ssl-key=/etc/my.cnf.d/certificates/client-key.pem \
           --ssl-ca=/etc/my.cnf.d/certificates/ca.pem \
           --ssl-cipher=TLSv1.2

    To determine if the server restricts clients to specific ciphers, check the ssl_cipher system variable.

    This option implies the --ssl option.

    --ssl-crl

    Defines a path to a PEM file that should contain one or more revoked X509 certificates to use for TLS. This option requires that you use the absolute path, not a relative path. For example:

        --ssl-crl=/etc/my.cnf.d/certificates/crl.pem

    This option is usually used with other TLS options. For example:

        mariadb-backup --backup \
           --ssl-cert=/etc/my.cnf.d/certificates/client-cert.pem \
           --ssl-key=/etc/my.cnf.d/certificates/client-key.pem \
           --ssl-ca=/etc/my.cnf.d/certificates/ca.pem \
           --ssl-crl=/etc/my.cnf.d/certificates/crl.pem

    See Secure Connections Overview: Certificate Revocation Lists (CRLs) for more information.

    This option is only supported if mariadb-backup was built with OpenSSL. If mariadb-backup was built with yaSSL, then this option is not supported. See TLS and Cryptography Libraries Used by MariaDB for more information about which libraries are used on which platforms.

    --ssl-crlpath

    Defines a path to a directory that contains one or more PEM files that should each contain one revoked X509 certificate to use for TLS. This option requires that you use the absolute path, not a relative path. For example:

        --ssl-crlpath=/etc/my.cnf.d/certificates/crl/

    This option is usually used with other TLS options. For example:

        mariadb-backup --backup \
           --ssl-cert=/etc/my.cnf.d/certificates/client-cert.pem \
           --ssl-key=/etc/my.cnf.d/certificates/client-key.pem \
           --ssl-ca=/etc/my.cnf.d/certificates/ca.pem \
           --ssl-crlpath=/etc/my.cnf.d/certificates/crl/

    The directory specified by this option needs to be run through the openssl rehash command.

    See Secure Connections Overview: Certificate Revocation Lists (CRLs) for more information.

    This option is only supported if mariadb-backup was built with OpenSSL. If mariadb-backup was built with yaSSL, then this option is not supported. See TLS and Cryptography Libraries Used by MariaDB for more information about which libraries are used on which platforms.

    --ssl-key

    Defines a path to a private key file to use for TLS. This option requires that you use the absolute path, not a relative path. For example:

        --ssl-key=/etc/my.cnf.d/certificates/client-key.pem

    This option is usually used with other TLS options. For example:

        mariadb-backup --backup \
           --ssl-cert=/etc/my.cnf.d/certificates/client-cert.pem \
           --ssl-key=/etc/my.cnf.d/certificates/client-key.pem \
           --ssl-ca=/etc/my.cnf.d/certificates/ca.pem

    This option implies the --ssl option.

    --ssl-verify-server-cert

    Enables server certificate verification. This option is disabled by default.

    This option is usually used with other TLS options. For example:

        mariadb-backup --backup \
           --ssl-cert=/etc/my.cnf.d/certificates/client-cert.pem \
           --ssl-key=/etc/my.cnf.d/certificates/client-key.pem \
           --ssl-ca=/etc/my.cnf.d/certificates/ca.pem \
           --ssl-verify-server-cert

    --stream

    Streams backup files to stdout.

    Using this option, you can set mariadb-backup to stream the backup files to stdout in the given format. Currently, the only supported format is xbstream. For example:

        mariadb-backup --stream=xbstream > backup.xb

    To extract all files from the xbstream archive into a directory, use the mbstream utility:

        mbstream -x < backup.xb

    If a backup is streamed, then mariadb-backup records the format in the xtrabackup_info file.

    --tables

    Defines the tables you want to include in the backup.

    Using this option, you can define which tables you want mariadb-backup to back up from the database. The table values are defined using regular expressions (regex). To define the tables you want to exclude from the backup, see the --tables-exclude option. For example:

        mariadb-backup --backup \
             --databases=example \
             --tables=nodes_* \
             --tables-exclude=nodes_tmp

    In this example, nodes_* matches tables named nodes, nodes_, nodes__, and so forth, because * means zero or more occurrences of the previous character (_).

    If instead you want to back up all tables whose names start with nodes, the regular expression is ^nodes., and to exclude tables starting with nodes_tmp, the expression is ^nodes_tmp.. (The period (.) matches any single character, so these patterns match names with at least one character after nodes or nodes_tmp.) The command looks like this:

        mariadb-backup --backup \
             --databases=example \
             --tables=^nodes. \
             --tables-exclude=^nodes_tmp.

    In that example, some of the tables included via the --tables option are excluded by --tables-exclude. That works because --tables-exclude takes precedence over --tables.

    You can specify multiple table name regex patterns as a comma-separated list, for both the --tables and the --tables-exclude options.

    The following command backs up all tables in the test1 and test2 databases, except the exclude_table table in the test2 database, and stores the backup files under /path/to/backups/:

        mariadb-backup --backup \
             --tables=test1[.].*,test2[.].* \
             --tables-exclude=^test2[.]exclude_table \
             --target-dir=/path/to/backups/

    The --databases and --databases-exclude options, if used, take precedence over --tables and --tables-exclude. That is, they can filter out tables first, which are then not "visible" to the --tables and --tables-exclude options.

    If a backup is a partial backup, mariadb-backup records that detail in the xtrabackup_info file.
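    The matching behavior described above can be checked with any extended-regex tool before running a backup; this sketch uses grep -E on a hypothetical list of table names (mariadb-backup applies its patterns to names in a similar unanchored fashion):

```shell
# Hypothetical table names to test the patterns against.
tables='nodes
nodes_tmp
nodes_2024
anodes
other'

# ^nodes. requires "nodes" at the start plus at least one more
# character, so the bare name "nodes" and "anodes" do not match.
# Prints nodes_tmp and nodes_2024.
printf '%s\n' "$tables" | grep -E '^nodes.'
```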

    --tables-exclude

    Defines the tables you want to exclude from the backup.

    Using this option, you can define which tables you want mariadb-backup to exclude from the backup. The table values are defined using regular expressions. To define the tables you want to include in the backup, see the --tables option.

    See the --tables option for examples and hints regarding regular expressions.

    If a backup is a partial backup, mariadb-backup records that detail in the xtrabackup_info file.

    --tables-file

    Defines path to file with tables for backups.

    Using this option, you can set a path to a file listing the tables you want to back up. mariadb-backup iterates over each line in the file. The format of each line is database.table. For example:

        mariadb-backup --backup \
             --databases=example \
             --tables-file=/etc/mysql/backup-file

    If a backup is a partial backup, then mariadb-backup records that detail in the xtrabackup_info file.
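    A tables file is just a newline-separated list; this minimal sketch builds one with hypothetical table names and prints it back:

```shell
# Hypothetical backup list in database.table format, one entry per line.
cat > /tmp/backup-tables <<'EOF'
example.table1
example.table2
EOF

# mariadb-backup would read this file via --tables-file=/tmp/backup-tables.
cat /tmp/backup-tables
```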

    --target-dir

    Defines the destination directory.

    Using this option, you can define the destination directory for the backup. mariadb-backup writes all backup files to this directory and creates it if it does not exist. It does not create the full path recursively, however: the parent directory of --target-dir must already exist. For example:

        mariadb-backup --backup \
               --target-dir=/data/backups

    --throttle

    Defines a limit for I/O operations per second.

    Using this option, you can set a limit on the number of I/O operations mariadb-backup performs per second. It is only used with the --backup option.

    --tls-version

    This option accepts a comma-separated list of TLS protocol versions. A TLS protocol version is only enabled if it is present in this list. All other TLS protocol versions will not be permitted. For example:

        --tls-version="TLSv1.2,TLSv1.3"

    This option is usually used with other TLS options. For example:

        mariadb-backup --backup \
           --ssl-cert=/etc/my.cnf.d/certificates/client-cert.pem \
           --ssl-key=/etc/my.cnf.d/certificates/client-key.pem \
           --ssl-ca=/etc/my.cnf.d/certificates/ca.pem \
           --tls-version="TLSv1.2,TLSv1.3"

    See Secure Connections Overview: TLS Protocol Versions for more information.

    -t, --tmpdir

    Defines the path for temporary files.

    Using this option, you can define the path to a directory mariadb-backup uses for writing temporary files. If you want to use more than one directory, separate the values with a semicolon (;). When multiple temporary directories are passed, mariadb-backup cycles through them in round-robin fashion. For example:

        mariadb-backup --backup \
             --tmpdir="/data/tmp;/tmp"
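    As a rough illustration of that round-robin cycling (not mariadb-backup's actual code), this sketch splits a hypothetical semicolon-separated list and assigns four temporary files to directories in turn:

```shell
# Split the semicolon-separated list into positional parameters.
tmpdirs='/data/tmp;/tmp'
IFS=';'
set -- $tmpdirs            # $1=/data/tmp, $2=/tmp
unset IFS
ndirs=$#

# Assign each temporary file to the (i mod ndirs)-th directory.
i=0
while [ "$i" -lt 4 ]; do
  slot=$(( (i % ndirs) + 1 ))
  eval "dir=\$$slot"
  echo "tmpfile$i -> $dir"
  i=$((i + 1))
done
```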

    --use-memory

    Defines the buffer pool size that is used during the prepare stage.

    Using this option, you can define the buffer pool size mariadb-backup uses during the --prepare stage. Use it instead of buffer_pool_size. For example:

        mariadb-backup --prepare \
              --use-memory=124M

    --user

    Defines the username for connecting to the MariaDB Server.

    When mariadb-backup runs, it connects to the specified MariaDB Server to take its backups. Using this option, you can define the database user used for authentication. Starting from MariaDB 10.6.17, MariaDB 10.11.7, and MariaDB 11.4.1, if the --user option is omitted, the user name is detected from the OS. For example:

        mariadb-backup --backup \
              --user=root \
              --password=root_passwd

    --verbose

    Displays verbose output.

    --version

    Prints the mariadb-backup version information to stdout.

    This page is licensed: CC BY-SA / Gnu FDL
