Grasp the basics of using MariaDB Server. This section introduces fundamental concepts, common SQL commands, and essential operations to get you started with your database.
Learn effective data handling in MariaDB Server. This section covers data types, storage engines, data manipulation, and best practices for managing your information efficiently.
Learn how to effectively use MariaDB Server. This section covers SQL statements, built-in functions, client utilities, and best practices for daily database operations.
This page lists the most important SQL statements and contains links to their documentation pages. If you need a basic tutorial on how to use the MariaDB database server and how to execute simple commands, see the basic primer later in this section, which also includes examples of commonly used queries.
CREATE DATABASE is used to create a new, empty database.
DROP DATABASE is used to completely destroy an existing database.
USE is used to select a default database.
CREATE TABLE is used to create a new table, which is where your data is actually stored.
ALTER TABLE is used to modify an existing table's definition.
DROP TABLE is used to completely destroy an existing table.
DESCRIBE shows the structure of a table.
SELECT is used when you want to read (or select) your data.
INSERT is used when you want to add (or insert) new data.
UPDATE is used when you want to change (or update) existing data.
DELETE is used when you want to remove (or delete) existing data.
REPLACE is used when you want to add or change (or replace) new or existing data.
TRUNCATE is used when you want to empty (or delete) all data from a table.
BEGIN (or START TRANSACTION) is used to begin a transaction.
COMMIT is used to apply changes and end the transaction.
ROLLBACK is used to discard changes and end the transaction.
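Taken together, the transaction statements work like this; the accounts table and values below are purely illustrative:

```sql
-- Both updates are applied atomically: either both take effect,
-- or neither does if ROLLBACK is issued instead of COMMIT.
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
```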
The first version of this article was copied, with permission, from on 2012-10-05.
This page is licensed: CC BY-SA / Gnu FDL
Manage tables in MariaDB Server. This section details creating, altering, and dropping tables, along with understanding data types and storage engines for optimal database design.
Optimize large tables in MariaDB Server with partitioning. Learn how to divide tables into smaller, manageable parts for improved performance, easier maintenance, and scalability.
Learn to back up and restore MariaDB Enterprise Server. This section details specialized tools and methods for ensuring robust data protection and efficient recovery in enterprise environments.
Utilize stored functions in MariaDB Server. This section details creating, using, and managing user-defined functions to extend SQL capabilities and streamline data manipulation.
Learn to back up and restore MariaDB Server databases. This section covers essential strategies and tools to ensure data safety and quick recovery from potential data loss.
Master stored procedures in MariaDB Server. This section covers creating, executing, and managing these powerful routines to encapsulate complex logic and improve application performance.
Explore different partitioning types for MariaDB Server tables. Understand range, list, hash, and key partitioning to optimize data management and improve query performance.
Automate tasks in MariaDB Server with stored routines. Learn to create and manage stored procedures and functions for enhanced database efficiency and code reusability.
Understand InnoDB's architecture for MariaDB Enterprise Server. This section details its components and their interactions, focusing on performance, scalability, and reliability for enterprise workloads.
Understand MariaDB Server's storage engines. Explore the features and use cases of InnoDB, Aria, MyISAM, and other engines to choose the best option for your specific data needs.
Discover InnoDB, the default storage engine for MariaDB Server. Learn about its transaction-safe capabilities, foreign key support, and high performance for demanding workloads.
CREATE DATABASE mydb;
USE mydb;
CREATE TABLE mytable ( id INT PRIMARY KEY, name VARCHAR(20) );
INSERT INTO mytable VALUES ( 1, 'Will' );
INSERT INTO mytable VALUES ( 2, 'Marry' );
INSERT INTO mytable VALUES ( 3, 'Dean' );
SELECT id, name FROM mytable WHERE id = 1;
UPDATE mytable SET name = 'Willy' WHERE id = 1;
SELECT id, name FROM mytable;
DELETE FROM mytable WHERE id = 1;
SELECT id, name FROM mytable;
DROP DATABASE mydb;
SELECT count(1) FROM mytable; gives the number of records in the table.
Following a few conventions makes finding errors in queries a lot easier, especially when you ask for help from people who might know SQL, but know nothing about your particular schema. A query easy to read is a query easy to debug. Use whitespace to group clauses within the query. Choose good table and field aliases to add clarity, not confusion. Choose the syntax that supports the query's meaning.
A query hard to read is a query hard to debug. White space is free. New lines and indentation make queries easy to read, particularly when constructing a query inside a scripting language, where variables are interspersed throughout the query.
There is a syntax error in the following. How fast can you find it?
SELECT u.id, u.name, alliance.ally FROM users u JOIN alliance ON
(u.id=alliance.userId) JOIN team ON (alliance.teamId=team.teamId
WHERE team.teamName='Legionnaires' AND u.online=1 AND ((u.subscription='paid'
AND u.paymentStatus='current') OR u.subscription='free') ORDER BY u.name;
Here's the same query, with correct use of whitespace. Can you find the error faster?
SELECT
u.id
, u.name
, alliance.ally
FROM
users u
JOIN alliance ON (u.id = alliance.userId)
JOIN team ON (alliance.teamId = team.teamId
WHERE
team.teamName = 'Legionnaires'
AND u.online = 1
AND (
(u.subscription = 'paid' AND u.paymentStatus = 'current')
OR
u.subscription = 'free'
)
ORDER BY
u.name;
Even if you don't know SQL, you might still have caught the missing ')' following team.teamId.
The exact formatting style you use isn't so important. You might like commas in the select list to follow expressions, rather than precede them. You might indent with tabs or with spaces. Adherence to some particular form is not important. Legibility is the only goal.
Aliases allow you to rename tables and fields for use within a query. This can be handy when the original names are very long, and is required for self joins and certain subqueries. However, poorly chosen aliases can make a query harder to debug, rather than easier. Aliases should reflect the original table name, not an arbitrary string.
Bad:
SELECT *
FROM
financial_reportQ_1 AS a
JOIN sales_renderings AS b ON (a.salesGroup = b.groupId)
JOIN sales_agents AS c ON (b.groupId = c.group)
WHERE
b.totalSales > 10000
AND c.id != a.clientId
As the list of joined tables and the WHERE clause grow, it becomes necessary to repeatedly look back to the top of the query to see to which table any given alias refers.
Better:
SELECT *
FROM
financial_report_Q_1 AS frq1
JOIN sales_renderings AS sr ON (frq1.salesGroup = sr.groupId)
JOIN sales_agents AS sa ON (sr.groupId = sa.group)
WHERE
sr.totalSales > 10000
AND sa.id != frq1.clientId
Each alias is just a little longer, but the table initials give enough clues that anyone familiar with the database only need see the full table name once, and can generally remember which table goes with which alias while reading the rest of the query.
The manual warns against using the JOIN condition (that is, the ON clause) for restricting rows. Some queries, particularly those using implicit joins, take the opposite extreme: all join conditions are moved to the WHERE clause. In consequence, the table relationships are mixed with the business logic.
Bad:
SELECT *
FROM
family,
relationships
WHERE
family.personId = relationships.personId
AND relationships.relation = 'father'
Without digging through the WHERE clause, it is impossible to say what links the two tables.
Better:
SELECT *
FROM
family
JOIN relationships ON (family.personId = relationships.personId)
WHERE
relationships.relation = 'father'
The relation between the tables is immediately obvious. The WHERE clause is left to limit rows in the result set.
Compliance with such a restriction negates the use of the comma operator to join tables. It is a small price to pay. Queries should be written using the explicit JOIN keyword anyway, and the two should never be mixed (unless you like rewriting all your queries every time a new version changes operator precedence).
Syntax errors are among the easiest problems to solve. MariaDB provides an error message showing the exact point where the parser became confused. Check the query, including a few words before the phrase shown in the error message. Most syntax and parsing errors are obvious after a second look, but some are more elusive, especially when the error text seems empty, points to a valid keyword, or seems to error on syntax that appears exactly correct.
Most syntax errors are easy to interpret. The error generally details the exact source of the trouble. A careful look at the query, with the error message in mind, often reveals an obvious mistake, such as a misspelled field name, a missing AND, or an extra closing parenthesis. Sometimes the error is a little less helpful. A frequent, less-than-helpful message:
ERROR 1064: You have an error in your SQL syntax; check the manual that corresponds to your
MariaDB server version for the right syntax to use near ' ' at line 1
The empty ' ' can be disheartening. Clearly there is an error, but where? A good place to look is at the end of the query. The ' ' suggests that the parser reached the end of the statement while still expecting some syntax token to appear.
Check for missing closers, such as ' and ):
SELECT * FROM someTable WHERE field = 'value
Look for incomplete clauses, often indicated by an exposed comma:
SELECT * FROM someTable WHERE field = 1 GROUP BY id,
MariaDB allows table and field names and aliases that are also reserved words. To prevent ambiguity, such names must be enclosed in backticks (`):
SELECT * FROM actionTable WHERE `DELETE` = 1;
If the syntax error is shown near one of your identifiers, check if it appears on the reserved word list.
A text editor with color highlighting for SQL syntax helps to find these errors. When you enter a field name, and it shows up in the same color as the SELECT keyword, you know something is amiss. Some common culprits:
DESC is a common abbreviation for "description" fields. It means "descending" in a MariaDB ORDER BY clause.
DATE, TIME, and TIMESTAMP are all common field names. They are also field types.
ORDER appears in sales applications. MariaDB uses it in ORDER BY to specify sorting for results.
Some keywords are so common that MariaDB makes a special allowance to use them unquoted. My advice: don't. If it's a keyword, quote it.
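For example, a hypothetical sales schema with a column actually named order would quote the identifier everywhere it appears:

```sql
-- ORDER is a reserved word, so the column name is backtick-quoted.
CREATE TABLE sales (`order` INT, amount DECIMAL(10,2));
SELECT `order`, amount FROM sales ORDER BY `order`;
```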
As MariaDB adds new features, the syntax must change to support them. Most of the time, old syntax will work in newer versions of MariaDB. One notable exception is the change in precedence of the comma operator relative to the JOIN keyword in version 5.0. A query that used to work, such as
SELECT * FROM a, b JOIN c ON a.x = c.x;
will now fail.
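Two possible fixes, assuming the original intent was a comma join of a and b combined with an inner join to c: parenthesize the comma-joined tables so that a is visible to the ON clause, or rewrite with explicit JOINs only:

```sql
-- Fix 1: parentheses restore the old grouping.
SELECT * FROM (a, b) JOIN c ON a.x = c.x;

-- Fix 2 (preferred): explicit JOIN syntax throughout.
SELECT * FROM a CROSS JOIN b JOIN c ON a.x = c.x;
```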
More common, however, is an attempt to use new syntax in an old version. Web hosting companies are notoriously slow to upgrade MariaDB, and you may find yourself using a version several years out of date. The result can be very frustrating when a query that executes flawlessly on your own workstation, running a recent installation, fails completely in your production environment.
This query fails in any version of MySQL prior to 4.1, when subqueries were added to the server:
SELECT * FROM someTable WHERE someId IN (SELECT id FROM someLookupTable);
This query fails in some early versions of MySQL, because the JOIN syntax did not originally allow an ON clause:
SELECT * FROM tableA JOIN tableB ON tableA.x = tableB.y;
Always check the installed version of MariaDB, and read the section of the manual relevant for that version. The manual usually indicates exactly when particular syntax became available for use.
The initial version of this article was copied, with permission, from Basic_Debugging on 2012-10-05.
This page is licensed: CC BY-SA / Gnu FDL
Get an overview of MariaDB Backup. This section introduces the hot physical backup tool, explaining its capabilities for efficient and consistent backups of your MariaDB Server.
The doublewrite buffer was implemented to recover from half-written pages. This can happen when there's a power failure while InnoDB is writing a page to disk. On reading that page, InnoDB can discover the corruption from the mismatch of the page checksum. However, in order to recover, an intact copy of the page would be needed.
The doublewrite buffer provides such a copy.
Whenever InnoDB flushes a page to disk, it is first written to the doublewrite buffer. Only when the buffer is safely flushed to disk will InnoDB write the page to its final destination. During recovery, InnoDB scans the doublewrite buffer and, for each valid page in the buffer, checks whether the page in the data file is valid too.
To turn off the doublewrite buffer, set the innodb_doublewrite system variable to 0. This is safe on filesystems that write pages atomically - that is, where a page write fully succeeds or fails. On other filesystems, disabling it is not recommended for production systems. An alternative option is atomic writes.
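In an option file, and assuming a filesystem with atomic page writes (ZFS, for example), that setting would look like this sketch:

```ini
# my.cnf fragment: disable the InnoDB doublewrite buffer.
# Only safe when the filesystem writes InnoDB pages atomically.
[mariadb]
innodb_doublewrite = 0
```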
This page is licensed: CC BY-SA / Gnu FDL
Learn how to connect to MariaDB Server. This section details various methods and tools for establishing secure and efficient connections to your database from different applications and environments.
MariaDB is a database system, a database server. To interface with the MariaDB server, you can use a client program, or you can write a program or script in one of the popular programming languages (e.g., PHP) using an API (Application Programming Interface). For the purposes of this article, we will focus on the default client that comes with MariaDB, called mariadb. With this client, you can either enter queries from the command-line, or you can switch to interactive monitor mode. To start, we'll use the latter.
From the Linux command-line, you would enter the following to log in as the root user and to enter monitor mode:
mariadb -u root -p -h localhost
The -u option is for specifying the user name. You would replace root here if you want to use a different user name. This is the MariaDB user name, not the Linux user name. The password for the MariaDB user root will probably be different from the password for the Linux user root. Incidentally, it's not a good security practice to use the root user unless you have a specific administrative task to perform for which only root has the needed privileges.
The -p option above instructs the mariadb client to prompt you for the password. If the password for the root user hasn't been set yet, then the password is blank and you would just hit [Enter] when prompted. The -h option is for specifying the host name or the IP address of the server. This would be necessary if the client is running on a different machine than the server. If you've secure-shelled into the server machine, you probably won't need to use the host option. In fact, if you're logged into Linux as root, you won't need the user option; the -p is all you'll need. Once you've entered the line above along with the password when prompted, you will be logged into MariaDB through the client. To exit, type quit or exit and press [Enter].
In order to be able to add and to manipulate data, you first have to create a database structure. Creating a database is simple. You would enter something like the following from within the mariadb client:
CREATE DATABASE bookstore;
USE bookstore;
This very minimal, first SQL statement will create a sub-directory called bookstore on the Linux filesystem in the directory which holds your MariaDB data files. It won't create any data, obviously. It'll just set up a place to add tables, which will in turn hold data. The second SQL statement above will set this new database as the default database. It will remain your default until you change it to a different one or until you log out of MariaDB.
The next step is to begin creating tables. This is only a little more complicated. To create a simple table that will hold basic data on books, we could enter something like the following:
CREATE TABLE books (
isbn CHAR(20) PRIMARY KEY,
title VARCHAR(50),
author_id INT,
publisher_id INT,
year_pub CHAR(4),
description TEXT );
This SQL statement creates the table books with six fields, or rather columns. The first column (isbn) is an identification number for each row—this name relates to the unique identifier used in the book publishing business. It has a fixed-width character type of 20 characters. It will be the primary key column on which data will be indexed. The column data type for the book title is a variable width character column of fifty characters at most. The third and fourth columns will be used for identification numbers for the author and the publisher. They are integer data types. The fifth column is used for the publication year of each book. The last column is for entering a description of each book. It's a TEXT data type, which means that it's a variable width column and it can hold up to 65535 bytes of data for each row. There are several other data types that may be used for columns, but this gives you a good sampling.
To see how the table we created looks, enter the following SQL statement:
DESCRIBE books;
+--------------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------------+-------------+------+-----+---------+-------+
| isbn | char(20) | NO | PRI | NULL | |
| title | varchar(50) | YES | | NULL | |
| author_id | int(11) | YES | | NULL | |
| publisher_id | int(11) | YES | | NULL | |
| year_pub | char(4) | YES | | NULL | |
| description | text | YES | | NULL | |
+--------------+-------------+------+-----+---------+-------+
To change the settings of a table, you can use the ALTER TABLE statement. I'll cover that statement in another article. To delete a table completely (including its data), you can use the DROP TABLE statement, followed by the table name. Be careful with this statement since it's not reversible.
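As a small taste of ALTER TABLE before that article, here is how a new column could be added to books; the column name and type are just an example:

```sql
-- Add a price column; existing rows get NULL for the new field.
ALTER TABLE books ADD COLUMN price DECIMAL(8,2);
```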
The next table we'll create for our examples is the authors table to hold author information. This table will save us from having to enter the author's name and other related data for each book written by each author. It also helps to ensure consistency of data: there's less chance of inadvertent spelling deviations.
CREATE TABLE authors
(author_id INT AUTO_INCREMENT PRIMARY KEY,
name_last VARCHAR(50),
name_first VARCHAR(50),
country VARCHAR(50) );
We'll join this table to the books table as needed. For instance, we would use it when we want a list of books along with their corresponding authors' names. For a real bookstore's database, both of these tables would probably have more columns. There would also be several more tables. For the examples that follow, these two tables as they are will be enough.
Before moving on to the next step of adding data to the tables, let me point out a few minor items that I've omitted mentioning. SQL statements end with a semi-colon (or a \G). You can spread an SQL statement over multiple lines. However, it won't be passed to the server by the client until you terminate it with a semi-colon and hit [Enter]. To cancel an SQL statement once you've started typing it, enter \c and press [Enter].
As a basic convention, reserved words are printed in all capital letters. This isn't necessary, though. MariaDB is case-insensitive with regards to reserved words. Database and table names, however, are case-sensitive on Linux. This is because they reference the related directories and files on the filesystem. Column names aren't case sensitive since they're not affected by the filesystem, per se. As another convention, we use lower-case letters for structural names (e.g., table names). It's a matter of preference for deciding on names.
The primary method for entering data into a table is to use the INSERT statement. As an example, let's enter some information about an author into the authors table. We'll do that like so:
INSERT INTO authors
(name_last, name_first, country)
VALUES('Kafka', 'Franz', 'Czech Republic');
This will add the name and country of the author Franz Kafka to the authors table. We don't need to give a value for the author_id since that column was created with the AUTO_INCREMENT option. MariaDB will automatically assign an identification number. You can manually assign one, especially if you want to start the count at a higher number than 1 (e.g., 1000). Since we are not providing data for all of the columns in the table, we have to list the columns for which we are giving data and in the order that the data is given in the set following the VALUES keyword. This means that we could give the data in a different order.
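If you do want the automatic count to start at a higher number, you can set the table's counter before inserting; 1000 here is just an illustrative value:

```sql
-- Subsequent auto-generated author_id values start at 1000.
ALTER TABLE authors AUTO_INCREMENT = 1000;
```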
For an actual database, we would probably enter data for many authors. We'll assume that we've done that and move on to entering data for some books. Below is an entry for one of Kafka's books:
INSERT INTO books
(title, author_id, isbn, year_pub)
VALUES('The Castle', '1', '0805211063', '1998');
This adds a record for Kafka's book, The Castle. Notice that we mixed up the order of the columns, but it still works because both sets agree. We indicate that the author is Kafka by giving a value of 1 for the author_id. This is the value that was assigned by MariaDB when we entered the row for Kafka earlier. Let's enter a few more books for Kafka, but by a different method:
INSERT INTO books
(title, author_id, isbn, year_pub)
VALUES('The Trial', '1', '0805210407', '1995'),
('The Metamorphosis', '1', '0553213695', '1995'),
('America', '1', '0805210644', '1995');
In this example, we've added three books in one statement. This allows us to give the list of column names once. We also give the keyword VALUES only once, followed by a separate set of values for each book, each contained in parentheses and separated by commas. This cuts down on typing and speeds up the process. Either method is fine and both have their advantages. To be able to continue with our examples, let's assume that data on thousands of books has been entered. With that behind us, let's look at how to retrieve data from tables.
The primary method of retrieving data from tables is to use a SELECT statement. There are many options available with the SELECT statement, but you can start simply. As an example, let's retrieve a list of book titles from the books table:
SELECT title
FROM books;
This will display all of the rows of books in the table. If the table has thousands of rows, MariaDB will display thousands. To limit the number of rows retrieved, we could add a LIMIT clause to the SELECT statement like so:
SELECT title
FROM books
LIMIT 5;
This will limit the number of rows displayed to five. To be able to list the author's name for each book along with the title, you will have to join the books table with the authors table. To do this, we can use the JOIN clause like so:
SELECT title, name_last
FROM books
JOIN authors USING (author_id);
Notice that the primary table from which we're drawing data is given in the FROM clause. The table to which we're joining is given in the JOIN clause along with the commonly named column (i.e., author_id) that we're using for the join.
To retrieve the titles of only books written by Kafka based on his name (not the author_id), we would use the WHERE clause with the SELECT statement. This would be entered like the following:
SELECT title AS 'Kafka Books'
FROM books
JOIN authors USING (author_id)
WHERE name_last = 'Kafka';
+-------------------+
| Kafka Books |
+-------------------+
| The Castle |
| The Trial |
| The Metamorphosis |
| America |
+-------------------+
This statement will list the titles of Kafka books stored in the database. Notice that I've added the AS keyword next to the column name title to change the column heading in the result set to Kafka Books. This is known as an alias. Looking at the results here, we can see that the title for one of Kafka's books is incorrect. His book Amerika is spelled with a "c" in the table instead of a "k". This leads to the next section on changing data.
In order to change existing data, a common method is to use the UPDATE statement. When changing data, though, we need to be sure that we change the correct rows. In our example, there could be another book with the title America written by a different author. Since the key column isbn has only unique numbers and we know the ISBN number for the book that we want to change, we can use it to specify the row.
UPDATE books
SET title = 'Amerika'
WHERE isbn = '0805210644';
This will change the value of the title column for the row specified. We could change the value of other columns for the same row by giving the column = value for each, separated by commas.
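For instance, to change both the title and the publication year of that same row in one statement (the year value here is made up for illustration):

```sql
-- Multiple assignments are separated by commas.
UPDATE books
SET title = 'Amerika', year_pub = '1996'
WHERE isbn = '0805210644';
```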
If we want to delete a row of data, we can use the DELETE statement. For instance, suppose that our fictitious bookstore has decided no longer to carry books by John Grisham. By first running a SELECT statement, we determine the identification number for the author to be 2034. Using this author identification number, we could enter the following:
DELETE FROM books
WHERE author_id = '2034';
This statement will delete all rows from the table books for the author_id given. To do a clean job of it, we'll have to do the same for the authors table. We would just replace the table name in the statement above; everything else would be the same.
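The matching clean-up of the authors table would then be:

```sql
-- Remove the author's own row, using the same identification number.
DELETE FROM authors
WHERE author_id = '2034';
```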
This is a very basic primer for using MariaDB. Hopefully, it gives you an idea of how to get started with MariaDB. Each of the SQL statements mentioned here has several more options and clauses. We will cover these statements and others in greater detail in other articles. For now, though, you can learn more about them from experimenting and by further reading of the online documentation.
This page is licensed: CC BY-SA / Gnu FDL
When using mariadb-backup, you have the option of performing a full or an incremental backup. Full backups create a complete backup of the database server in an empty directory while incremental backups update a previous backup with whatever changes to the data have occurred since the backup. This page documents how to perform full backups.
In order to back up the database, you need to run mariadb-backup with the --backup option to tell it to perform a backup and with the --target-dir option to tell it where to place the backup files. When taking a full backup, the target directory must be empty or it must not exist.
To take a backup, run the following command:
$ mariadb-backup --backup \
--target-dir=/var/mariadb/backup/ \
--user=mariadb-backup --password=mypassword
The time the backup takes depends on the size of the databases or tables you're backing up. You can cancel the backup if you need to, as the backup process does not modify the database.
mariadb-backup writes the backup files to the target directory. If the target directory doesn't exist, then it creates it. If the target directory exists and contains files, then it raises an error and aborts.
Here is an example backup directory:
$ ls /var/mariadb/backup/
aria_log.0000001 mysql xtrabackup_checkpoints
aria_log_control performance_schema xtrabackup_info
backup-my.cnf test xtrabackup_logfile
ibdata1 xtrabackup_binlog_info
The data files that mariadb-backup creates in the target directory are not point-in-time consistent, given that the data files are copied at different times during the backup operation. If you try to restore from these files, InnoDB notices the inconsistencies and crashes to protect you from corruption.
Before you can restore from a backup, you first need to prepare it to make the data files consistent. You can do so with the --prepare option.
$ mariadb-backup --prepare \
--target-dir=/var/mariadb/backup/
Run mariadb-backup --backup. You must use a version of mariadb-backup that is compatible with the server version you are planning to upgrade from. For instance, when upgrading from MariaDB 10.4 to 10.5, you must use the 10.4 version of mariadb-backup. Another example: when upgrading from MariaDB 10.6 to 10.11, you must use the 10.6 version of mariadb-backup.
Run mariadb-backup --prepare, again using a compatible version of mariadb-backup, as described in the previous step.
Once the backup is complete and you have prepared the backup for restoration (previous step), you can restore the backup using either the --copy-back or the --move-back options. The --copy-back option allows you to keep the original backup files. The --move-back option actually moves the backup files to the datadir, so the original backup files are lost.
First, stop the MariaDB Server process.
Then, ensure that the datadir is empty.
Then, run mariadb-backup with one of the options mentioned above:
$ mariadb-backup --copy-back \
--target-dir=/var/mariadb/backup/
Then, you may need to fix the file permissions.
When mariadb-backup restores a database, it preserves the file and directory privileges of the backup. However, it writes the files to disk as the user and group restoring the database. As such, after restoring a backup, you may need to adjust the owner of the data directory to match the user and group for the MariaDB Server, typically mysql for both. For example, to recursively change ownership of the files to the mysql user and group, you could execute:
$ chown -R mysql:mysql /var/lib/mysql/
Finally, start the MariaDB Server process.
Once a full backup is prepared, it is a fully functional MariaDB data directory. Therefore, as long as the MariaDB Server process is stopped on the target server, you can technically restore the backup using any file copying tool, such as cp or rsync. For example, you could also execute the following to restore the backup (note the trailing slash on the source path, so that the directory's contents, rather than the directory itself, are copied into the datadir):
$ rsync -avrP /var/mariadb/backup/ /var/lib/mysql/
$ chown -R mysql:mysql /var/lib/mysql/
This page is licensed: CC BY-SA / Gnu FDL
When using mariadb-backup, you don't necessarily need to restore every table and/or partition that was backed up. Even if you're starting from a full backup, it is certainly possible to restore only certain tables and/or partitions from the backup, as long as the table or partition involved is in an InnoDB file-per-table tablespace. This page documents how to restore individual tables and partitions.
Before you can restore from a backup, you first need to prepare it to make the data files consistent. You can do so with the --prepare command option.
The ability to restore individual tables and partitions relies on InnoDB's transportable tablespaces. For MariaDB to import tablespaces like these, InnoDB looks for a file with a .cfg extension. For mariadb-backup to create these files, you also need to add the --export option during the prepare step.
For example, you might execute the following command:
$ mariadb-backup --prepare --export \
--target-dir=/var/mariadb/backup/ \
--user=mariadb-backup --password=mypassword
If this operation completes without error, then the backup is ready to be restored.
Note
Early versions of mariadb-backup did not support the --export option (see MDEV-13466). In those versions, mariadb-backup could not create .cfg files for InnoDB file-per-table tablespaces during the --prepare stage. You can still import file-per-table tablespaces without the .cfg files in many cases, so it may still be possible in those versions to restore partial backups or to restore individual tables and partitions with just the .ibd files. If you have a full backup and you need to create .cfg files for InnoDB file-per-table tablespaces, then you can do so by preparing the backup as usual without the --export option, restoring the backup, and then starting the server. At that point, you can use the server's built-in features to copy the transportable tablespaces.
The restore process for restoring individual tables and/or partitions is quite different from the process for full backups.
Rather than using the --copy-back or --move-back options, each individual InnoDB file-per-table tablespace file will have to be manually imported into the target server. The process used to restore the backup will depend on whether partitioning is involved.
To restore individual non-partitioned tables from a backup, find the .ibd and .cfg files for the table in the backup, and then import them using the Importing Transportable Tablespaces for Non-partitioned Tables process.
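A minimal sketch of that import, assuming a hypothetical database app1 and table tab1 (the exact steps are described in the transportable-tablespaces documentation):

```sql
-- 1. Create an empty table with the same definition on the target server.
CREATE TABLE app1.tab1 (id INT PRIMARY KEY) ENGINE=InnoDB;

-- 2. Discard its tablespace so the data files can be replaced.
ALTER TABLE app1.tab1 DISCARD TABLESPACE;

-- 3. Copy tab1.ibd and tab1.cfg from the backup into the database
--    directory (e.g. /var/lib/mysql/app1/), fix their ownership, then:
ALTER TABLE app1.tab1 IMPORT TABLESPACE;
```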
To restore individual partitions or partitioned tables from a backup, find the .ibd and .cfg files for the partition(s) in the backup, and then import them using the Importing Transportable Tablespaces for Partitioned Tables process.
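For a partition, the import typically goes through an intermediate non-partitioned table, roughly like this (names here are hypothetical):

```sql
-- Create a non-partitioned table with the same structure as the
-- partitioned table, and discard its tablespace.
CREATE TABLE app1.tab1_p0 LIKE app1.tab1;
ALTER TABLE app1.tab1_p0 REMOVE PARTITIONING;
ALTER TABLE app1.tab1_p0 DISCARD TABLESPACE;

-- Copy tab1#P#p0.ibd and tab1#P#p0.cfg from the backup into place as
-- tab1_p0.ibd and tab1_p0.cfg, then import and swap the partition:
ALTER TABLE app1.tab1_p0 IMPORT TABLESPACE;
ALTER TABLE app1.tab1 EXCHANGE PARTITION p0 WITH TABLE app1.tab1_p0;
```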
This page is licensed: CC BY-SA / Gnu FDL
Replication can be used to support the backup strategy.
Replication alone is not sufficient for backup. It assists in protecting against hardware failure on the primary server, but does not protect against data loss. An accidental or malicious DROP DATABASE or TRUNCATE TABLE statement will be replicated onto the replica as well. Care needs to be taken to prevent data getting out of sync between the primary and the replica.
Replication is most commonly used to support backups as follows:
A primary server replicates to a replica.
Backups are then run off the replica without any impact on the primary.
Backups can have a significant effect on a server, and a high-availability primary may not be able to be stopped or locked, or to simply handle the extra load of a backup. Running the backup from a replica has the advantage of being able to shut down or lock the replica and perform a backup without any impact on the primary server.
Note that when backing up off a replica server, it is important to ensure that the servers keep the data in sync. See for example Replication and Foreign Keys for a situation when identical statements can result in different data on a replica and a primary.
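A minimal sketch of this pattern (paths and credentials are placeholders; pausing the SQL thread is optional, since mariadb-backup takes consistent backups of a running server, but it keeps the replica's data files quiescent during the backup):

```shell
# Pause the replication applier on the replica.
mariadb -e "STOP SLAVE SQL_THREAD;"

# Run the backup from the replica, with no impact on the primary.
mariadb-backup --backup --target-dir=/var/mariadb/backup/ \
    --user=mariadb-backup --password=mypassword

# Resume replication.
mariadb -e "START SLAVE SQL_THREAD;"
```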
This page is licensed: CC BY-SA / Gnu FDL
mariadb-backup backs up the files listed below.
mariadb-backup backs up the following InnoDB data files:
mariadb-backup will back up tables that use the MyRocks storage engine. This data is located in the directory defined by the rocksdb_datadir system variable. mariadb-backup backs this data up by performing a checkpoint using the rocksdb_create_checkpoint system variable.
mariadb-backup also backs up files with the following extensions:
frm
isl
MYD
MYI
MAD
MAI
MRG
TRG
TRN
ARM
ARZ
CSM
CSV
opt
par
mariadb-backup does not back up the files listed below.
This page is licensed: CC BY-SA / Gnu FDL
The following limitations apply to partitioning in MariaDB:
Each table can contain a maximum of 8192 partitions. Until , the limit was 1024.
Queries are never parallelized, even when they involve multiple partitions.
A table can only be partitioned if the storage engine supports partitioning.
All partitions must use the same storage engine. For a workaround, see Using CONNECT - Partitioning and Sharding.
A partitioned table cannot contain, or be referenced by, foreign keys.
The query cache is not aware of partitioning and partition pruning. Modifying a partition will invalidate the entries related to the whole table.
When binlog_format=ROW, updates to a partitioned table can run more slowly than equivalent updates to a non-partitioned table.
All columns used in the partitioning expression for a partitioned table must be part of every unique key that the table may have.
In versions prior to , it is not possible to create partitions on tables that contain GEOMETRY types.
INFORMATION_SCHEMA.PARTITIONS contains information about existing partitions.
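The unique-key limitation above can be illustrated as follows (error text matches what the server reports):

```sql
-- Fails: partitioning column `created` is not part of the primary key.
CREATE TABLE t1 (
  id INT NOT NULL PRIMARY KEY,
  created DATE NOT NULL
)
PARTITION BY HASH (YEAR(created))
PARTITIONS 4;
-- ERROR 1503 (HY000): A PRIMARY KEY must include all columns in the
-- table's partitioning function

-- Works: the partitioning column is included in the primary key.
CREATE TABLE t1 (
  id INT NOT NULL,
  created DATE NOT NULL,
  PRIMARY KEY (id, created)
)
PARTITION BY HASH (YEAR(created))
PARTITIONS 4;
```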
See Partition Maintenance for suggestions on using partitions.
This page is licensed: CC BY-SA / Gnu FDL
A partitioned table is stored in multiple files. By default, these files are stored in the MariaDB (or InnoDB) data directory. It is possible to keep them in different paths by specifying DATA_DIRECTORY and INDEX_DIRECTORY table options. This is useful to store different partitions on different devices.
Note that, if the innodb_file_per_table server system variable is set to 0 at the time of the table creation, all partitions will be stored in the system tablespace.
The following files exist for each partitioned table:
table_name.frm: contains the table definition. Non-partitioned tables have this file, too.
table_name.par: contains the partition definitions.
table_name#P#partition_name.ext: normal files created by the storage engine use this pattern for names. The extension depends on the storage engine.
For example, an InnoDB table with 4 partitions will have the following files:
orders.frm
orders.par
orders#P#p0.ibd
orders#P#p1.ibd
orders#P#p2.ibd
orders#P#p3.ibd
If we convert the table to MyISAM, we will have these files:
orders.frm
orders.par
orders#P#p0.MYD
orders#P#p0.MYI
orders#P#p1.MYD
orders#P#p1.MYI
orders#P#p2.MYD
orders#P#p2.MYI
orders#P#p3.MYD
orders#P#p3.MYI
This page is licensed: CC BY-SA / Gnu FDL
LIST partitioning is conceptually similar to RANGE partitioning. In both cases you decide a partitioning expression (a column, or a slightly more complex calculation) and use it to determine which partitions will contain each row. However, with the RANGE type, partitioning is done by assigning a range of values to each partition. With the LIST type, we assign a set of values to each partition. This is usually preferred if the partitioning expression can return a limited set of values.
A variant of this partitioning method, LIST COLUMNS, allows us to use multiple columns and more data types.
The last part of a CREATE TABLE statement can be the definition of the new table's partitions. In the case of LIST partitioning, the syntax is the following:
PARTITION BY LIST (partitioning_expression)
(
PARTITION partition_name VALUES IN (value_list),
[ PARTITION partition_name VALUES IN (value_list), ... ]
[ PARTITION partition_name DEFAULT ]
)
PARTITION BY LIST indicates that the partitioning type is LIST.
The partitioning_expression is an SQL expression that returns a value from each row. In the simplest cases, it is a column name. This value is used to determine which partition should contain a row.
partition_name is the name of a partition.
value_list is a list of values. If partitioning_expression returns one of these values, the row will be stored in this partition. If we try to insert something that does not belong to any of these value lists, the row will be rejected with an error.
The DEFAULT partition catches all records which do not fit into other partitions.
LIST partitioning can be useful when we have a column that can only contain a limited set of values. Even in that case, RANGE partitioning could be used instead; but LIST partitioning allows us to equally distribute the rows by assigning a proper set of values to each partition.
CREATE OR REPLACE TABLE t1 (
num TINYINT(1) NOT NULL
)
ENGINE = InnoDB
PARTITION BY LIST (num) (
PARTITION p0 VALUES IN (0,1),
PARTITION p1 VALUES IN (2,3),
PARTITION p2 DEFAULT
);
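With the table above, inserts behave as follows (without the DEFAULT partition, the last insert would be rejected with an error):

```sql
INSERT INTO t1 VALUES (0);  -- stored in p0
INSERT INTO t1 VALUES (3);  -- stored in p1
INSERT INTO t1 VALUES (9);  -- caught by the DEFAULT partition p2
```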
This page is licensed: CC BY-SA / Gnu FDL
PARTITION BY LINEAR KEY ([column_names])
[PARTITIONS (number_of_partitions)]
LINEAR KEY partitioning is a form of partitioning, similar to KEY partitioning.
LINEAR KEY partitioning uses a powers-of-two algorithm to determine the partition number, while KEY partitioning uses modulo arithmetic.
Adding, dropping, merging and splitting partitions is much faster than with the KEY partitioning type; however, data is less likely to be evenly distributed over the partitions.
CREATE OR REPLACE TABLE t1 (v1 INT)
PARTITION BY LINEAR KEY (v1)
PARTITIONS 2;
This page is licensed: CC BY-SA / Gnu FDL
This page is intended to be a quick reference of commonly-used and/or useful queries in MariaDB.
See for more.
See for more.
The AUTO_INCREMENT attribute is used to automatically generate a unique identity for new rows.
When inserting, the id field can be omitted, and is automatically created.
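For example (table and data are illustrative):

```sql
CREATE TABLE students (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(50)
);

-- id is omitted and generated automatically (1, 2, ...):
INSERT INTO students (name) VALUES ('Chun'), ('Esben');
```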
See for more.
This kind of query is called a join - see for more.
See the for more, as well as below for a more practical example.
See the for more.
See the for more.
See the for more.
See for more.
In this example, we want to find the lowest test score for any student.
This example returns the best test results of each student:
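Both queries can be sketched against a hypothetical results table with name and score columns:

```sql
-- Lowest test score for any student:
SELECT MIN(score) FROM results;

-- Best test result of each student:
SELECT name, MAX(score) AS best_score
FROM results
GROUP BY name;
```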
The function can be used to calculate someone's age:
See for more.
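One common approach uses TIMESTAMPDIFF, shown here with a literal date of birth:

```sql
SELECT TIMESTAMPDIFF(YEAR, '1990-06-15', CURDATE()) AS age;
```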
This example sets a with the average test score, and then uses it in a later query to return all results above the average.
User-defined variables can also be used to add an incremental counter to a resultset:
See for more.
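Both techniques, sketched against a hypothetical results table:

```sql
-- Average score into a user-defined variable, reused in a later query:
SELECT @avg_score := AVG(score) FROM results;
SELECT name, score FROM results WHERE score > @avg_score;

-- Incremental counter added to a resultset:
SET @counter = 0;
SELECT @counter := @counter + 1 AS row_num, name FROM results;
```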
Returns a list of all tables in the database, ordered by size:
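A query along these lines does the job (sizes come from information_schema, so they are estimates for some storage engines):

```sql
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM information_schema.TABLES
WHERE table_schema = DATABASE()
ORDER BY data_length + index_length DESC;
```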
This example assumes there's a unique ID, but that all other fields are identical. In the example below, there are 4 records, 3 of which are duplicates, so two of the three duplicates need to be removed. The intermediate SELECT is not necessary, but demonstrates what is being returned.
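A sketch with a hypothetical table t (unique id; f1 and f2 are the duplicated fields): the SELECT shows the rows that would be removed, and the multi-table DELETE keeps the lowest id of each duplicate group:

```sql
-- Rows that would be removed:
SELECT t1.id
FROM t t1
JOIN t t2
  ON t1.f1 = t2.f1 AND t1.f2 = t2.f2 AND t1.id > t2.id;

-- Remove them, keeping the lowest id of each group:
DELETE t1
FROM t t1
JOIN t t2
  ON t1.f1 = t2.f1 AND t1.f2 = t2.f2 AND t1.id > t2.id;
```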
CC BY-SA / Gnu FDL
When a WHERE clause is related to the partitioning expression, the optimizer knows which partitions are relevant for the query. Other partitions will not be read. This optimization is called partition pruning.
can be used to know which partitions will be read for a given query. A column called partitions will contain a comma-separated list of the accessed partitions. For example:
Sometimes the WHERE clause does not contain the necessary information to use partition pruning, or the optimizer cannot infer this information. However, we may know which partitions are relevant for the query. Since , we can force MariaDB to only access the specified partitions by adding a PARTITION clause. This feature is called partition selection. For example:
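Sketches of both techniques, assuming a hypothetical partitioned table named orders:

```sql
-- Partition pruning: the `partitions` column of EXPLAIN PARTITIONS
-- shows which partitions will actually be read.
EXPLAIN PARTITIONS SELECT * FROM orders WHERE id < 100;

-- Partition selection: only the listed partitions are accessed.
SELECT * FROM orders PARTITION (p0, p1) WHERE id < 100;
```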
The PARTITION clause is supported for all DML statements:
In general, partition pruning is applied to statements contained in .
However, note that if a BEFORE INSERT or BEFORE UPDATE trigger is defined on a table, MariaDB doesn't know in advance whether the columns used in the partitioning expression will be changed. For this reason, it is forced to lock all partitions.
This page is licensed: CC BY-SA / Gnu FDL
When using mariadb-backup, you have the option of performing partial backups. Partial backups allow you to choose which databases or tables to back up, as long as the table or partition involved is in an InnoDB file-per-table tablespace. This page documents how to perform partial backups.
Just like with full backups, in order to back up the database, you need to run mariadb-backup with the --backup option to tell it to perform a backup and with the --target-dir option to tell it where to place the backup files. The target directory must be empty or not exist.
For a partial backup, there are a few other arguments that you can provide as well:
To tell it which databases to back up, you can provide the --databases option.
To tell it which databases to exclude from the backup, you can provide the --databases-exclude option.
To tell it to check a file for the databases to back up, you can provide the --databases-file option.
To tell it which tables to back up, you can use the --tables option.
To tell it which tables to exclude from the backup, you can provide the --tables-exclude option.
To tell it to check a file for specific tables to back up, you can provide the --tables-file option.
The non-file partial backup options support regex in the database and table names.
For example, to take a backup of any database that starts with the string app1_ and any table in those databases that starts with the string tab_, run the following command:
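The command itself might look like the sketch below; the options accept regular expressions, so check the option documentation for the exact pattern syntax (paths and credentials are placeholders):

```shell
mariadb-backup --backup \
   --target-dir=/var/mariadb/backup/ \
   --user=mariadb-backup --password=mypassword \
   --databases='app1_.*' --tables='tab_.*'
```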
mariadb-backup cannot currently back up a subset of partitions from a partitioned table. Backing up a partitioned table is currently an all-or-nothing selection. See about that. If you need to back up a subset of partitions, then one possibility is that, instead of using mariadb-backup, you can export the file-per-table tablespaces of the partitions.
The time the backup takes depends on the size of the databases or tables you're backing up. You can cancel the backup if you need to, as the backup process does not modify the database.
mariadb-backup writes the backup files to the target directory. If the target directory doesn't exist, then it creates it. If the target directory exists and contains files, then it raises an error and aborts.
Just like with full backups, the data files that mariadb-backup creates in the target directory are not point-in-time consistent, given that the data files are copied at different times during the backup operation. If you try to restore from these files, InnoDB notices the inconsistencies and crashes to protect you from corruption. In fact, for partial backups, the backup is not even a completely functional MariaDB data directory, so InnoDB would raise more errors than it would for full backups. This point will also be very important to keep in mind during the restore process.
Before you can restore from a backup, you first need to prepare it to make the data files consistent. You can do so with the --prepare command option.
Partial backups rely on InnoDB's transportable tablespaces. For MariaDB to import such tablespaces, InnoDB looks for a file with a .cfg extension. For mariadb-backup to create these files, you also need to add the --export option during the prepare step.
For example, you might execute the same prepare command as for a full backup:
$ mariadb-backup --prepare --export \
--target-dir=/var/mariadb/backup/ \
--user=mariadb-backup --password=mypassword
If this operation completes without error, then the backup is ready to be restored.
Early versions of mariadb-backup did not support the --export option. See about that. In those versions, mariadb-backup could not create .cfg files for InnoDB file-per-table tablespaces during the --prepare stage. You can still import file-per-table tablespaces without the .cfg files in many cases, so it may still be possible in those versions to restore partial backups or to restore individual tables and partitions with just the .ibd files. If you have a full backup and you need to create .cfg files for InnoDB file-per-table tablespaces, then you can do so by preparing the backup as usual without the --export option, restoring the backup, and then starting the server. At that point, you can use the server's built-in features to copy the transportable tablespaces.
The restore process for partial backups is quite different from the process for full backups. A partial backup is not a completely functional data directory. The data dictionary in the InnoDB system tablespace will still contain entries for the databases and tables that were not included in the backup.
Rather than using the --copy-back or --move-back options, each individual InnoDB file-per-table tablespace file will have to be manually imported into the target server. The process used to import the file will depend on whether partitioning is involved.
To restore individual non-partitioned tables from a backup, find the .ibd and .cfg files for the table in the backup, and then import them using the Importing Transportable Tablespaces for Non-partitioned Tables process.
To restore individual partitions or partitioned tables from a backup, find the .ibd and .cfg files for the partition(s) in the backup, and then import them using the Importing Transportable Tablespaces for Partitioned Tables process.
This page is licensed: CC BY-SA / Gnu FDL
mariadb-backup supports streaming to stdout with the --stream=xbstream option. This option allows easy integration with popular encryption and compression tools. Below are several examples.
The following example creates an AES-encrypted backup, protected with the password "mypass" and stores it in a file "backup.xb.enc":
To decrypt and unpack this backup into the current directory, the following command can be used:
This example compresses the backup without encrypting:
We can decompress and unpack the backup as follows:
This example adds a compression step before the encryption, otherwise looks almost identical to the previous example:
We can decrypt, decompress and unpack the backup as follows (note gzip -d in the pipeline):
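The compress-then-encrypt pipeline pattern can be demonstrated end to end with stand-in data (openssl with -pbkdf2 requires OpenSSL 1.1.1 or later; in practice the stream would come from mariadb-backup --backup --stream=xbstream, and unpacking would use mbstream instead of plain file redirection):

```shell
# Stand-in for the backup stream.
printf 'dummy backup stream' > stream.dat

# Compress, then encrypt with the password "mypass":
gzip -c stream.dat | openssl enc -aes-256-cbc -pbkdf2 -k mypass -out backup.gz.enc

# Decrypt, then decompress (note gzip -d in the pipeline):
openssl enc -d -aes-256-cbc -pbkdf2 -k mypass -in backup.gz.enc | gzip -d > restored.dat

# Verify the roundtrip:
cmp -s stream.dat restored.dat && echo roundtrip-ok
```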
The 7zip archiver is a popular utility (especially on Windows) that supports reading from standard input with the -si option and writing to standard output with the -so option, and can thus be used together with mariadb-backup.
Compressing backup with the 7z command line utility works as follows:
Uncompress and unpack the archive with
7z also has built-in AES-256 encryption. To encrypt the backup from the previous example using the password SECRET, add -pSECRET to the 7z command line.
Compress
Decompress, unpack
Encryption
Decrypt, unpack
Most of the described tools also provide a way to enter a passphrase interactively (although 7zip does not seem to work well when reading input from stdin). Please consult the documentation of these tools for more information.
By default, files like xtrabackup_checkpoints are written only to the output stream, and so would not be available for taking further incremental backups without first extracting them from the compressed or encrypted stream output file.
To avoid this, these files can additionally be written to a directory that can then be used as input for further incremental backups, using the --extra-lsndir=... option.
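A sketch of that option in use (paths and credentials are placeholders):

```shell
mariadb-backup --backup --stream=xbstream \
    --extra-lsndir=/var/mariadb/backup_lsn/ \
    --user=mariadb-backup --password=mypassword \
    | gzip > /var/mariadb/backup.gz

# /var/mariadb/backup_lsn/ now holds xtrabackup_checkpoints, and can be
# passed as --incremental-basedir for a later incremental backup.
```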
See also e.g: Combining incremental backups with streaming output
This page is licensed: CC BY-SA / Gnu FDL
This statement can be used to change the characteristics of a stored procedure. More than one change may be specified in an ALTER PROCEDURE statement. However, you cannot change the parameters or body of a stored procedure using this statement. To make such changes, you must drop and re-create the procedure using either (since ) or and ( and before).
You must have the ALTER ROUTINE privilege for the procedure. By default, that privilege is granted automatically to the procedure creator. See .
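For example (the procedure name is hypothetical; only characteristics change, not parameters or body):

```sql
ALTER PROCEDURE simpleproc
  SQL SECURITY INVOKER
  COMMENT 'Runs with the privileges of the invoker';
```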
This page is licensed: GPLv2, originally from
RANGE COLUMNS and LIST COLUMNS are variants of, respectively, and . With these partitioning types there is not a single partitioning expression; instead, a list of one or more columns is accepted. The following rules apply:
The list can contain one or more columns.
Columns can be of any , , , and types.
Only bare columns are permitted; no expressions.
All the specified columns are compared to the specified values to determine which partition should contain a specific row. See below for details.
The last part of a statement can be the definition of the new table's partitions. In the case of RANGE COLUMNS partitioning, the syntax is the following:
PARTITION BY RANGE COLUMNS (column_list)
(
PARTITION partition_name VALUES LESS THAN (value_list),
[ PARTITION partition_name VALUES LESS THAN (value_list), ... ]
)
The syntax for LIST COLUMNS is the following:
PARTITION BY LIST COLUMNS (column_list)
(
PARTITION partition_name VALUES IN (value_list),
[ PARTITION partition_name VALUES IN (value_list), ... ]
[ PARTITION partition_name DEFAULT ]
)
partition_name is the name of a partition.
To determine which partition should contain a row, all specified columns will be compared to each partition definition.
With LIST COLUMNS, a row matches a partition if all row values are identical to the specified values. At most one partition can match the row.
With RANGE COLUMNS, a row matches a partition if all row values are less than the specified values. The first partition that matches the row values will be used.
The DEFAULT partition catches all records which do not fit in other partitions. Only one DEFAULT partition is allowed.
RANGE COLUMNS partition:
LIST COLUMNS partition:
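Hedged examples of both variants (the table definitions are illustrative):

```sql
-- RANGE COLUMNS: a row goes to the first partition whose bound is
-- greater than the row's values.
CREATE OR REPLACE TABLE logs (
  dt DATE NOT NULL,
  msg VARCHAR(100)
)
PARTITION BY RANGE COLUMNS (dt) (
  PARTITION p0 VALUES LESS THAN ('2023-01-01'),
  PARTITION p1 VALUES LESS THAN ('2024-01-01'),
  PARTITION p2 VALUES LESS THAN (MAXVALUE)
);

-- LIST COLUMNS: a row must match one of the listed values; DEFAULT
-- catches everything else.
CREATE OR REPLACE TABLE stores (
  city VARCHAR(32),
  country CHAR(2)
)
PARTITION BY LIST COLUMNS (country) (
  PARTITION p_eu VALUES IN ('DE','FR','IT'),
  PARTITION p_na VALUES IN ('US','CA'),
  PARTITION p_other DEFAULT
);
```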
This page is licensed: CC BY-SA / Gnu FDL
Locks are acquired by a transaction to prevent concurrent transactions from modifying, or even reading, some rows or ranges of rows. This is done to make sure that concurrent write operations never collide.
supports a number of lock modes.
The two standard row-level locks are shared locks (S) and exclusive locks (X).
A shared lock is obtained to read a row, and allows other transactions to read the locked row, but not to write to the locked row. Other transactions may also acquire their own shared locks.
An exclusive lock is obtained to write to a row, and stops other transactions from locking the same row. Its specific behavior depends on the isolation level; the default, REPEATABLE READ, allows other transactions to read from the exclusively locked row.
InnoDB also permits table locking, and to allow locking at both table and row level to co-exist gracefully, a series of locks called intention locks exist.
An intention shared lock (IS) indicates that a transaction intends to set a shared lock.
An intention exclusive lock (IX) indicates that a transaction intends to set an exclusive lock.
Whether a lock is granted or not can be summarised as follows:
An X lock is not granted if any other lock (X, S, IX, IS) is held.
An S lock is not granted if an X or IX lock is held. It is granted if an S or IS lock is held.
An IX lock is not granted if an X or S lock is held. It is granted if an IX or IS lock is held.
An IS lock is not granted if an X lock is held. It is granted if an S, IX or IS lock is held.
Locks are also required for auto-increments - see .
With the default isolation level, REPEATABLE READ, and, until , the default setting of the variable, a method called gap locking is used. When InnoDB sets a shared or exclusive lock on a record, it's actually set on the index record. Records will have an internal InnoDB index even if they don't have a unique index defined. At the same time, a lock is held on the gap before the index record, so that another transaction cannot insert a new index record in the gap between the record and the preceding record.
The gap can be a single index value, multiple index values, or not exist at all depending on the contents of the index.
If a statement uses all the columns of a unique index to search for a unique row, gap locking is not used.
Similar to the shared and exclusive intention locks described above, there can be a number of types of gap locks. These include the shared gap lock, exclusive gap lock, intention shared gap lock and intention exclusive gap lock.
Gap locks are disabled if the system variable is set (until ), or the is set to READ COMMITTED.
This page is licensed: CC BY-SA / Gnu FDL
The following restrictions apply to .
All of the restrictions listed in .
Any statements that return a result set are not permitted. For example, a regular is not permitted, but a is. A cursor and statement is permitted.
statements are not permitted.
Statements that perform explicit or implicit commits or rollbacks are not permitted.
Cannot be used recursively.
Cannot make changes to a table that is already in use (reading or writing) by the statement invoking the stored function.
Cannot refer to a temporary table multiple times under different aliases, even in different statements.
ROLLBACK TO SAVEPOINT and RELEASE SAVEPOINT statements within a stored function cannot refer to a savepoint which was defined outside the current function.
Prepared statements (, , ) cannot be used, and therefore nor can statements be constructed as strings and then executed.
This page is licensed: CC BY-SA / Gnu FDL
A partitioning type determines how a partitioned table's rows are distributed across partitions. Some partition types require the user to specify a partitioning expression that determines in which partition a row will be stored.
The size of individual partitions depends on the partitioning type. Read and write performance are affected by the partitioning expression. Therefore, these choices should be made carefully.
MariaDB supports the following partitioning types:
This page is licensed: CC BY-SA / Gnu FDL
LINEAR HASH partitioning is a form of , similar to , in which the server takes care of the partition in which to place the data, ensuring a relatively even distribution among the partitions.
LINEAR HASH partitioning makes use of a powers-of-two algorithm, while HASH partitioning uses the modulus of the hashing function's value. Adding, dropping, merging and splitting partitions is much faster than with the , however, data is less likely to be evenly distributed over the partitions.
This page is licensed: CC BY-SA / Gnu FDL
The strategy applied when implementing data backups depends on business needs. These needs can be evaluated by performing a data inventory, determining data recovery objectives, considering the replication environment, and considering encryption requirements. A backup storage strategy, and testing of backup and recovery procedures, are also critical.
Backup strategy requirements flow from the understanding you build by performing a data inventory. A data inventory is established by asking questions such as:
What data is housed in the databases?
What business purpose does this data serve?
How long does the data need to be retained in order to meet this business purpose?
Are there any legal or regulatory requirements that would limit the length of data retention?
Data recovery requirements are often defined in terms of Recovery Point Objective (RPO) and Recovery Time Objective (RTO). RTO and RPO are considered in the context of the data identified in the data inventory.
Recovery Point Objective (RPO) defines the maximum amount of data a business is willing to lose. For example, a business can define an RPO of 24 hours.
Recovery Time Objective (RTO) defines how quickly a business needs to restore service in the event of a fault. For example, a business can define an RTO of 8 hours.
Backup strategy plays a substantial role in achieving RPO and RTO.
RPO depends on completion of backups, which provide a viable recovery point. Since RPO is measured at backup completion, not backup initiation, backup jobs must be scheduled at an interval smaller than the RPO.
Techniques for achieving RPO include:
Frequent incremental backups and less frequent full backups.
Performing backups in conjunction with replication and clustering to eliminate impact on production workloads, allowing a higher backup frequency.
Automated monitoring of backup status.
Automated testing of backups.
The RTO window typically commences at the point when a decision is made by the business to recover from backups, not at the start of an incident.
Techniques for achieving RTO include:
Leveraging information produced during incident response, which can reduce the set of data to restore from backups, or identify specific data validation requirements dependent on the nature of the incident.
Having fast access to backup data. Performance requirements of backup infrastructure should be understood for both backup and restoration workloads.
Using delayed replication, either within the same data center or to a different data center, can provide a shorter path to recovery. This is particularly true when coupled with robust application monitoring, which allows intervention before the window of delay elapses.
Applying written and tested recovery procedures, which designate the systems and commands to be used during recovery.
Performing drills and exercises that periodically test recovery procedures to confirm readiness.
MariaDB Enterprise Server supports several implementations of replication, which accurately duplicates data from one Server to one or more other Servers. The use of a dedicated replica as a source for backups can minimize workload impact.
MariaDB Enterprise Cluster implements virtually synchronous replication, where each Server instance contains a replica of all of the data for the Cluster. Backups can be performed from any node in the Cluster.
MariaDB Enterprise Server supports encryption on disk (data-at-rest encryption) and on the network (data-in-transit encryption).
MariaDB Enterprise Backup copies tablespaces from disk. When data-at-rest encryption is enabled, backups contain encrypted data.
MariaDB Enterprise Backup supports TLS encryption for communications with MariaDB Enterprise Server. To enable TLS encryption, set TLS options from the command-line or in the configuration file:
mariadb-backup --backup \
--target-dir=/data/backups/full \
--user=mariadb-backup \
--password=mbu_passwd \
--ssl-ca=/etc/my.cnf.d/certs/ca.pem \
--ssl-cert=/etc/my.cnf.d/certs/client-cert.pem \
--ssl-key=/etc/my.cnf.d/certs/client-key.pem
How backups are stored can impact backup viability. Backup storage also presents separate risks. These risks need to be carefully considered:
Backup data should always be stored separately from the system being backed up, and separate from the system used for recovery.
Backup data should be subject to equal or more controls than data in production databases. For example, backup data should generally be encrypted even where a decision has been made that a production database will not use data-at-rest encryption.
Business requirements may define a need for offsite storage of backups as a means of guaranteeing delivery on RPO. In these cases you should also consider onsite storage of backups as a means of guaranteeing delivery on RTO.
Retention requirements and the run-rate of new data production can aid in capacity planning.
Testing is a critical success factor for the successful operation of data systems.
Backups should be tested. Recovery using backups and recovery procedures should be tested.
This page is: Copyright © 2025 MariaDB. All rights reserved.
PARTITION BY KEY ([column_names])
[PARTITIONS (number_of_partitions)]
Partitioning by key is a type of partitioning that is similar to and can be used in a similar way as partitioning by hash.
KEY takes an optional list of column_names, and the hashing function is given by the server.
Just like HASH partitioning, in KEY partitioning the server takes care of the partition and ensures an even distribution among the partitions. However, the key difference is that KEY partitioning accepts only column_names, and cannot accept a partitioning_expression based on them, in contrast to HASH partitioning, which can.
If no column_names are specified, the table's primary key is used if present, or a NOT NULL unique key if no primary key is present. If neither of these keys is present, not specifying any column_names will result in ERROR 1488 (HY000): Field in list of fields for partition function not found in table.
Unlike other partitioning types, columns used for partitioning by KEY are not limited to integer or NULL values.
KEY partitions do not support column index prefixes. Any columns in the partitioning key that make use of column prefixes are not used (see also MDEV-32727).
CREATE OR REPLACE TABLE t1 (v1 INT)
PARTITION BY KEY (v1)
PARTITIONS 2;
CREATE OR REPLACE TABLE t1 (v1 INT, v2 INT)
PARTITION BY KEY (v1,v2)
PARTITIONS 2;
CREATE OR REPLACE TABLE t1 (
id INT NOT NULL PRIMARY KEY,
name VARCHAR(5)
)
PARTITION BY KEY()
PARTITIONS 2;
CREATE OR REPLACE TABLE t1 (
id INT NOT NULL UNIQUE KEY,
name VARCHAR(5)
)
PARTITION BY KEY()
PARTITIONS 2;
The unique key must be NOT NULL:
CREATE OR REPLACE TABLE t1 (
id INT NULL UNIQUE KEY,
name VARCHAR(5)
)
PARTITION BY KEY()
PARTITIONS 2;
ERROR 1488 (HY000): Field in list of fields for partition function not found in table
KEY requires column_names if no primary key or NOT NULL unique key is present:
CREATE OR REPLACE TABLE t1 (
id INT NULL UNIQUE KEY,
name VARCHAR(5)
)
PARTITION BY KEY()
PARTITIONS 2;
ERROR 1488 (HY000): Field in list of fields for partition function not found in table
CREATE OR REPLACE TABLE t1 (
id INT NULL UNIQUE KEY,
name VARCHAR(5)
)
PARTITION BY KEY(name)
PARTITIONS 2;
Primary key columns with index prefixes are silently ignored, so the following two queries are equivalent:
CREATE OR REPLACE TABLE t1 (
a VARCHAR(10),
b VARCHAR(10),
c VARCHAR(10),
PRIMARY KEY (a(5), b, c(5))
) PARTITION BY KEY() PARTITIONS 2;
CREATE OR REPLACE TABLE t1 (
a VARCHAR(10),
b VARCHAR(10),
c VARCHAR(10),
PRIMARY KEY (b)
) PARTITION BY KEY() PARTITIONS 2;
a(5) and c(5) are silently ignored in the former.
If all columns use index prefixes, the statement fails with a slightly misleading error:
CREATE OR REPLACE TABLE t1 (
a VARCHAR(10),
b VARCHAR(10),
c VARCHAR(10),
PRIMARY KEY (a(5), b(5), c(5))
) PARTITION BY KEY() PARTITIONS 2;
ERROR 1503 (HY000): A PRIMARY KEY must include all columns in the table's partitioning function
This page is licensed: CC BY-SA / Gnu FDL
It's important to give careful thought to the privileges associated with stored functions and stored procedures. The following is an explanation of how they work.
To create a stored routine, the CREATE ROUTINE privilege is needed. The SUPER privilege is required if a DEFINER is declared that is not the creator's account (see the DEFINER clause below). The SUPER privilege is also required if statement-based binary logging is used. See Binary Logging of Stored Routines for more details.
To make changes to, or drop, a stored routine, the ALTER ROUTINE privilege is needed. The creator of a routine is temporarily granted this privilege if they attempt to change or drop a routine they created, unless the automatic_sp_privileges variable is set to 0 (it defaults to 1). The SUPER privilege is also required if statement-based binary logging is used. See Binary Logging of Stored Routines for more details.
To run a stored routine, the EXECUTE privilege is needed. This is also temporarily granted to the creator if they attempt to run their routine, unless the automatic_sp_privileges variable is set to 0.
The SQL SECURITY clause (DEFINER by default) specifies which privileges are used when a routine is called. If SQL SECURITY is INVOKER, the function body is evaluated using the privileges of the user calling the function. If SQL SECURITY is DEFINER, the function body is always evaluated using the privileges of the definer account. Thus, by default, users who can access the database associated with the stored routine can also run the routine, and potentially perform operations they wouldn't normally have permission for.
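As a sketch (the procedure name and the orders table here are hypothetical), restricting a routine to the caller's own privileges looks like:

```sql
-- With SQL SECURITY INVOKER, the body runs with the caller's
-- privileges: callers can only count rows they may already read
CREATE PROCEDURE count_orders()
SQL SECURITY INVOKER
SELECT COUNT(*) FROM orders;
```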
The creator of a routine is the account that ran the CREATE FUNCTION or CREATE PROCEDURE statement, regardless of whether a DEFINER is provided. The definer is, by default, the creator, unless otherwise specified.
The server automatically changes the privileges in the mysql.proc table as required, but does not take manual changes into account.
If left out, the DEFINER is treated as the account that created the stored routine or view. If the account creating the routine has the SUPER privilege, another account can be specified as the DEFINER.
This clause specifies the context the stored routine or view will run as. It can take two values: DEFINER or INVOKER. DEFINER is the account specified as the DEFINER when the stored routine or view was created (see the section above). INVOKER is the account invoking the routine or view.
As an example, let's assume a routine, created by a superuser who's specified as the DEFINER, deletes all records from a table. If SQL SECURITY=DEFINER, anyone running the routine, regardless of whether they have delete privileges, will be able to delete the records. If SQL SECURITY=INVOKER, the routine will only delete the records if the account invoking the routine has permission to do so.
INVOKER is usually less risky, as a user cannot perform any operations they're normally unable to. However, it's not uncommon for accounts to have relatively limited permissions, but be specifically granted access to routines, which are then invoked in the DEFINER context.
All privileges that are specific to a stored routine are dropped when a DROP FUNCTION or DROP PROCEDURE is run. However, if a CREATE OR REPLACE FUNCTION or CREATE OR REPLACE PROCEDURE is used to drop and replace the routine, any privileges specific to that routine will not be dropped.
Changing the DEFINER of MySQL stored routines etc. - mariadb.com post on what to do after you've dropped a user and now want to change the DEFINER on all database objects that currently have it set to the dropped user.
The following SQL statements are not permitted inside any stored routines (stored functions, stored procedures, events or triggers).
ALTER VIEW; you can use CREATE OR REPLACE VIEW instead.
LOAD DATA and LOAD TABLE.
INSERT DELAYED is permitted, but the statement is handled as a regular INSERT.
LOCK TABLES and UNLOCK TABLES.
References to local variables within prepared statements inside a stored routine (use user-defined variables instead).
BEGIN (WORK) is treated as the beginning of a BEGIN END block, not a transaction, so START TRANSACTION needs to be used instead.
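For instance, a minimal sketch (the procedure name and accounts table are hypothetical) of opening a transaction inside a procedure:

```sql
DELIMITER //

-- Hypothetical procedure: moves 10 units between two account rows
CREATE PROCEDURE transfer_funds()
BEGIN
  -- START TRANSACTION must be used; BEGIN here would be parsed
  -- as the start of a BEGIN ... END block, not a transaction
  START TRANSACTION;
  UPDATE accounts SET balance = balance - 10 WHERE id = 1;
  UPDATE accounts SET balance = balance + 10 WHERE id = 2;
  COMMIT;
END //

DELIMITER ;
```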
The number of permitted recursive calls is limited to max_sp_recursion_depth. If this variable is 0 (the default), recursion is disabled. The limit does not apply to stored functions.
Most statements that are not permitted in prepared statements are not permitted in stored programs. See Prepare Statement:Permitted statements for a list of statements that can be used. SIGNAL, RESIGNAL and GET DIAGNOSTICS are exceptions, and may be used in stored routines.
There are also further limitations specific to the kind of stored routine.
Note that, if a stored program calls another stored program, the latter will inherit the caller's limitations. So, for example, if a stored procedure is called by a stored function, that stored procedure will not be able to produce a result set, because stored functions can't do this.
DROP PROCEDURE [IF EXISTS] sp_name
This statement is used to drop a stored procedure. That is, the specified routine is removed from the server, along with all privileges specific to the procedure. You must have the ALTER ROUTINE privilege for the routine. If the automatic_sp_privileges server system variable is set, that privilege and EXECUTE are granted automatically to the routine creator - see Stored Routine Privileges.
The IF EXISTS clause is a MySQL/MariaDB extension. It prevents an error from occurring if the procedure does not exist. A NOTE is produced that can be viewed with SHOW WARNINGS.
While this statement takes effect immediately, threads which are executing a procedure can continue execution.
DROP PROCEDURE simpleproc;
IF EXISTS:
DROP PROCEDURE simpleproc;
ERROR 1305 (42000): PROCEDURE test.simpleproc does not exist
DROP PROCEDURE IF EXISTS simpleproc;
Query OK, 0 rows affected, 1 warning (0.00 sec)
SHOW WARNINGS;
+-------+------+------------------------------------------+
| Level | Code | Message |
+-------+------+------------------------------------------+
| Note | 1305 | PROCEDURE test.simpleproc does not exist |
+-------+------+------------------------------------------+
This page is licensed: GPLv2, originally from fill_help_tables.sql
The PARTITIONS table in the INFORMATION_SCHEMA database contains information about partitions.
The SHOW TABLE STATUS statement contains a Create_options column, which contains the string 'partitioned' for partitioned tables.
The SHOW CREATE TABLE statement returns the CREATE TABLE statement that can be used to re-create a table, including the partitions definition.
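For example, a query along the following lines (the schema and table names are placeholders) lists each partition of a table together with its partitioning method and estimated row count:

```sql
SELECT partition_name, partition_method, table_rows
FROM information_schema.PARTITIONS
WHERE table_schema = 'test'   -- placeholder schema name
  AND table_name   = 't1';    -- placeholder table name
```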
PARTITION BY RANGE COLUMNS (col1, col2, ...)
(
PARTITION partition_name VALUES LESS THAN (value1, value2, ...),
[ PARTITION partition_name VALUES LESS THAN (value1, value2, ...), ... ]
)
PARTITION BY LIST COLUMNS (col1, col2, ...)
(
PARTITION partition_name VALUES IN (value1, value2, ...),
[ PARTITION partition_name VALUES IN (value1, value2, ...), ... ]
[ PARTITION partition_name DEFAULT ]
)
CREATE OR REPLACE TABLE t1 (
date1 DATE NOT NULL,
date2 DATE NOT NULL
)
ENGINE = InnoDB
PARTITION BY RANGE COLUMNS (date1,date2) (
PARTITION p0 VALUES LESS THAN ('2013-01-01', '1994-12-01'),
PARTITION p1 VALUES LESS THAN ('2014-01-01', '1995-12-01'),
PARTITION p2 VALUES LESS THAN ('2015-01-01', '1996-12-01')
);
CREATE OR REPLACE TABLE t1 (
num TINYINT(1) NOT NULL
)
ENGINE = InnoDB
PARTITION BY LIST COLUMNS (num) (
PARTITION p0 VALUES IN (0,1),
PARTITION p1 VALUES IN (2,3),
PARTITION p2 DEFAULT
);
PARTITION BY LINEAR HASH (partitioning_expression)
[PARTITIONS(number_of_partitions)]
CREATE OR REPLACE TABLE t1 (c1 INT, c2 DATETIME)
PARTITION BY LINEAR HASH(TO_DAYS(c2))
PARTITIONS 5;
CREATE TABLE t1 ( a INT );
CREATE TABLE t2 ( b INT );
CREATE TABLE student_tests (
name CHAR(10), test CHAR(10),
score TINYINT, test_date DATE
);
INSERT INTO t1 VALUES (1), (2), (3);
INSERT INTO t2 VALUES (2), (4);
INSERT INTO student_tests
(name, test, score, test_date) VALUES
('Chun', 'SQL', 75, '2012-11-05'),
('Chun', 'Tuning', 73, '2013-06-14'),
('Esben', 'SQL', 43, '2014-02-11'),
('Esben', 'Tuning', 31, '2014-02-09'),
('Kaolin', 'SQL', 56, '2014-01-01'),
('Kaolin', 'Tuning', 88, '2013-12-29'),
('Tatiana', 'SQL', 87, '2012-04-28'),
('Tatiana', 'Tuning', 83, '2013-09-30');
CREATE TABLE student_details (
id INT NOT NULL AUTO_INCREMENT, name CHAR(10),
date_of_birth DATE, PRIMARY KEY (id)
);
INSERT INTO student_details (name,date_of_birth) VALUES
('Chun', '1993-12-31'),
('Esben','1946-01-01'),
('Kaolin','1996-07-16'),
('Tatiana', '1988-04-13');
SELECT * FROM student_details;
+----+---------+---------------+
| id | name | date_of_birth |
+----+---------+---------------+
| 1 | Chun | 1993-12-31 |
| 2 | Esben | 1946-01-01 |
| 3 | Kaolin | 1996-07-16 |
| 4 | Tatiana | 1988-04-13 |
+----+---------+---------------+
SELECT * FROM t1 INNER JOIN t2 ON t1.a = t2.b;
SELECT MAX(a) FROM t1;
+--------+
| MAX(a) |
+--------+
| 3 |
+--------+
SELECT MIN(a) FROM t1;
+--------+
| MIN(a) |
+--------+
| 1 |
+--------+
SELECT AVG(a) FROM t1;
+--------+
| AVG(a) |
+--------+
| 2.0000 |
+--------+
SELECT name, MAX(score) FROM student_tests GROUP BY name;
+---------+------------+
| name | MAX(score) |
+---------+------------+
| Chun | 75 |
| Esben | 43 |
| Kaolin | 88 |
| Tatiana | 87 |
+---------+------------+
SELECT name, test, score FROM student_tests ORDER BY score DESC;
+---------+--------+-------+
| name | test | score |
+---------+--------+-------+
| Kaolin | Tuning | 88 |
| Tatiana | SQL | 87 |
| Tatiana | Tuning | 83 |
| Chun | SQL | 75 |
| Chun | Tuning | 73 |
| Kaolin | SQL | 56 |
| Esben | SQL | 43 |
| Esben | Tuning | 31 |
+---------+--------+-------+
SELECT name, test, score FROM student_tests WHERE score=(SELECT MIN(score) FROM student_tests);
+-------+--------+-------+
| name | test | score |
+-------+--------+-------+
| Esben | Tuning | 31 |
+-------+--------+-------+
SELECT name, test, score FROM student_tests st1 WHERE score = (
SELECT MAX(score) FROM student_tests st2 WHERE st1.name = st2.name
);
+---------+--------+-------+
| name | test | score |
+---------+--------+-------+
| Chun | SQL | 75 |
| Esben | SQL | 43 |
| Kaolin | Tuning | 88 |
| Tatiana | SQL | 87 |
+---------+--------+-------+
SELECT CURDATE() AS today;
+------------+
| today |
+------------+
| 2014-02-17 |
+------------+
SELECT name, date_of_birth, TIMESTAMPDIFF(YEAR,date_of_birth,'2014-08-02') AS age
FROM student_details;
+---------+---------------+------+
| name | date_of_birth | age |
+---------+---------------+------+
| Chun | 1993-12-31 | 20 |
| Esben | 1946-01-01 | 68 |
| Kaolin | 1996-07-16 | 18 |
| Tatiana | 1988-04-13 | 26 |
+---------+---------------+------+
SELECT @avg_score:= AVG(score) FROM student_tests;
+-------------------------+
| @avg_score:= AVG(score) |
+-------------------------+
| 67.000000000 |
+-------------------------+
SELECT * FROM student_tests WHERE score > @avg_score;
+---------+--------+-------+------------+
| name | test | score | test_date |
+---------+--------+-------+------------+
| Chun | SQL | 75 | 2012-11-05 |
| Chun | Tuning | 73 | 2013-06-14 |
| Kaolin | Tuning | 88 | 2013-12-29 |
| Tatiana | SQL | 87 | 2012-04-28 |
| Tatiana | Tuning | 83 | 2013-09-30 |
+---------+--------+-------+------------+
SET @count = 0;
SELECT @count := @count + 1 AS counter, name, date_of_birth FROM student_details;
+---------+---------+---------------+
| counter | name | date_of_birth |
+---------+---------+---------------+
| 1 | Chun | 1993-12-31 |
| 2 | Esben | 1946-01-01 |
| 3 | Kaolin | 1996-07-16 |
| 4 | Tatiana | 1988-04-13 |
+---------+---------+---------------+
SELECT table_schema AS `DB`, table_name AS `TABLE`,
ROUND(((data_length + index_length) / 1024 / 1024), 2) `Size (MB)`
FROM information_schema.TABLES
ORDER BY (data_length + index_length) DESC;
+--------------------+---------------------------------------+-----------+
| DB | Table | Size (MB) |
+--------------------+---------------------------------------+-----------+
| wordpress | wp_simple_history_contexts | 7.05 |
| wordpress | wp_posts | 6.59 |
| wordpress | wp_simple_history | 3.05 |
| wordpress | wp_comments | 2.73 |
| wordpress | wp_commentmeta | 2.47 |
| wordpress | wp_simple_login_log | 2.03 |
...
CREATE TABLE t (id INT, f1 VARCHAR(2));
INSERT INTO t VALUES (1,'a'), (2,'a'), (3,'b'), (4,'a');
SELECT * FROM t t1, t t2 WHERE t1.f1=t2.f1 AND t1.id<>t2.id AND t1.id=(
SELECT MAX(id) FROM t tab WHERE tab.f1=t1.f1
);
+------+------+------+------+
| id | f1 | id | f1 |
+------+------+------+------+
| 4 | a | 1 | a |
| 4 | a | 2 | a |
+------+------+------+------+
DELETE FROM t WHERE id IN (
SELECT t2.id FROM t t1, t t2 WHERE t1.f1=t2.f1 AND t1.id<>t2.id AND t1.id=(
SELECT MAX(id) FROM t tab WHERE tab.f1=t1.f1
)
);
Query OK, 2 rows affected (0.120 sec)
SELECT * FROM t;
+------+------+
| id | f1 |
+------+------+
| 3 | b |
| 4 | a |
+------+------+
EXPLAIN PARTITIONS SELECT * FROM orders WHERE id < 15000000;
+------+-------------+--------+------------+-------+---------------+---------+---------+------+------+-------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+--------+------------+-------+---------------+---------+---------+------+------+-------------+
| 1 | SIMPLE | orders | p0,p1 | range | PRIMARY | PRIMARY | 4 | NULL | 2 | Using where |
+------+-------------+--------+------------+-------+---------------+---------+---------+------+------+-------------+
SELECT * FROM orders PARTITION (p3) WHERE user_id = 50;
SELECT * FROM orders PARTITION (p2,p3) WHERE user_id >= 40;
$ mariadb-backup --backup \
--target-dir=/var/mariadb/backup/ \
--databases='app1_*' --tables='tab_*' \
--user=mariadb-backup --password=mypassword
$ mariadb-backup --prepare --export \
--target-dir=/var/mariadb/backup/
mariadb-backup --user=root --backup --stream=xbstream | openssl enc -aes-256-cbc -k mypass > backup.xb.enc
openssl enc -d -aes-256-cbc -k mypass -in backup.xb.enc | mbstream -x
mariadb-backup --user=root --backup --stream=xbstream | gzip > backupstream.gz
gunzip -c backupstream.gz | mbstream -x
mariadb-backup --user=root --backup --stream=xbstream | gzip | openssl enc -aes-256-cbc -k mypass > backup.xb.gz.enc
openssl enc -d -aes-256-cbc -k mypass -in backup.xb.gz.enc |gzip -d| mbstream -x
mariadb-backup --user=root --backup --stream=xbstream | 7z a -si backup.xb.7z
7z e backup.xb.7z -so |mbstream -x
mariadb-backup --user=root --backup --stream=xbstream | zstd - -o backup.xb.zst -f -1
zstd -d backup.xbstream.zst -c | mbstream -x
mariadb-backup --user=root --backup --stream=xbstream | gpg -c --passphrase SECRET --batch --yes -o backup.xb.gpg
gpg --decrypt --passphrase SECRET --batch --yes backup.xb.gpg | mbstream -x
ALTER PROCEDURE proc_name [characteristic ...]
characteristic:
{ CONTAINS SQL | NO SQL | READS SQL DATA | MODIFIES SQL DATA }
| SQL SECURITY { DEFINER | INVOKER }
| COMMENT 'string'
ALTER PROCEDURE simpleproc SQL SECURITY INVOKER;
mariadb-backup makes it very easy to set up a replica using a full backup. This page documents how to set up a replica from a backup.
If you are using MariaDB Galera Cluster, then you may want to try one of the following pages instead:
The first step is to simply take and prepare a fresh full backup of a database server in the replication topology. If the source database server is the desired replication primary, then we do not need to add any additional options when taking the full backup. For example:
$ mariadb-backup --backup \
--target-dir=/var/mariadb/backup/ \
--user=mariadb-backup --password=mypassword
If the source database server is a replica of the desired primary, then we should add the --slave-info option, and possibly the --safe-slave-backup option. For example:
$ mariadb-backup --backup \
--slave-info --safe-slave-backup \
--target-dir=/var/mariadb/backup/ \
--user=mariadb-backup --password=mypassword
Then we would prepare the backup as we normally would. For example:
$ mariadb-backup --prepare \
--target-dir=/var/mariadb/backup/
Once the backup is done and prepared, we can copy it to the new replica. For example:
$ rsync -avP /var/mariadb/backup dbserver2:/var/mariadb/backup
At this point, we can restore the backup to the datadir, as you normally would. For example:
$ mariadb-backup --copy-back \
--target-dir=/var/mariadb/backup/
And adjusting file permissions, if necessary:
$ chown -R mysql:mysql /var/lib/mysql/
Before the new replica can begin replicating from the primary, we need to create a user account on the primary that the replica can use to connect, and we need to grant the user account the REPLICATION SLAVE privilege. For example:
CREATE USER 'repl'@'dbserver2' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'dbserver2';
Before we start the server on the new replica, we need to configure it. At the very least, we need to ensure that it has a unique server_id value. We also need to make sure other replication settings are what we want them to be, such as the various GTID system variables, if those apply in the specific environment.
Once configuration is done, we can start the MariaDB Server process on the new replica.
At this point, we need to get the replication coordinates of the primary from the original backup directory.
If we took the backup on the primary, then the coordinates will be in the xtrabackup_binlog_info file. If we took the backup on another replica and provided the --slave-info option, then the coordinates will be in the xtrabackup_slave_info file.
mariadb-backup dumps replication coordinates in two forms: GTID coordinates and binary log file and position coordinates, like the ones you would normally see from SHOW MASTER STATUS output. We can choose which set of coordinates we would like to use to set up replication.
For example:
mariadb-bin.000096 568 0-1-2
Regardless of the coordinates we use, we will have to set up the primary connection using CHANGE MASTER TO and then start the replication threads with START SLAVE.
If we want to use GTIDs, then we will have to first set gtid_slave_pos to the GTID coordinates that we pulled from either the xtrabackup_binlog_info file or the xtrabackup_slave_info file in the backup directory. For example:
$ cat xtrabackup_binlog_info
mariadb-bin.000096 568 0-1-2
And then we would set MASTER_USE_GTID=slave_pos in the CHANGE MASTER TO command. For example:
SET GLOBAL gtid_slave_pos = "0-1-2";
CHANGE MASTER TO
MASTER_HOST="dbserver1",
MASTER_PORT=3306,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_USE_GTID=slave_pos;
START SLAVE;
If we want to use the binary log file and position coordinates, then we would set MASTER_LOG_FILE and MASTER_LOG_POS in the CHANGE MASTER TO command to the file and position coordinates that we pulled from either the xtrabackup_binlog_info file or the xtrabackup_slave_info file in the backup directory, depending on whether the backup was taken from the primary or from a replica of the primary. For example:
CHANGE MASTER TO
MASTER_HOST="dbserver1",
MASTER_PORT=3306,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_LOG_FILE='mariadb-bin.000096',
MASTER_LOG_POS=568;
START SLAVE;
We should be done setting up the replica now, so we should check its status with SHOW SLAVE STATUS. For example:
SHOW SLAVE STATUS\G
The InnoDB Monitor refers to particular kinds of monitors included in MariaDB and since the early versions of MySQL.
There are four types: the standard InnoDB Monitor, the InnoDB Lock Monitor, the InnoDB Tablespace Monitor, and the InnoDB Table Monitor.
The standard InnoDB Monitor returns extensive InnoDB information, particularly lock, semaphore, I/O and buffer activity:
To enable the standard InnoDB Monitor, from , set the innodb_status_output system variable to 1. Before , running the following statement was the method used:
CREATE TABLE innodb_monitor (a INT) ENGINE=INNODB;
To disable the standard InnoDB Monitor, either set the system variable to zero, or, before , drop the table:
DROP TABLE innodb_monitor;
The CREATE TABLE and DROP TABLE method of enabling and disabling the InnoDB Monitor has been deprecated, and may be removed in a future version of MariaDB.
For a description of the output, see SHOW ENGINE INNODB STATUS.
The InnoDB Lock Monitor displays additional lock information.
To enable the InnoDB Lock Monitor, the standard InnoDB monitor must be enabled. Then, from , set the innodb_status_output_locks system variable to 1. Before , running the following statement was the method used:
CREATE TABLE innodb_lock_monitor (a INT) ENGINE=INNODB;
To disable the InnoDB Lock Monitor, either set the system variable to zero, or, before , drop the table:
DROP TABLE innodb_lock_monitor;
The CREATE TABLE and DROP TABLE method of enabling and disabling the InnoDB Lock Monitor has been deprecated, and may be removed in a future version of MariaDB.
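With the system-variable method, enabling and disabling both the standard InnoDB Monitor and the InnoDB Lock Monitor can be sketched as:

```sql
SET GLOBAL innodb_status_output = 1;        -- standard InnoDB Monitor
SET GLOBAL innodb_status_output_locks = 1;  -- additional lock information

-- Disable again by setting the variables back to zero
SET GLOBAL innodb_status_output_locks = 0;
SET GLOBAL innodb_status_output = 0;
```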
The InnoDB Tablespace Monitor is deprecated, and may be removed in a future version of MariaDB.
Enabling the Tablespace Monitor outputs a list of file segments in the shared tablespace to the error log, and validates the tablespace allocation data structures.
To enable the Tablespace Monitor, run the following statement:
CREATE TABLE innodb_tablespace_monitor (a INT) ENGINE=INNODB;
To disable it, drop the table:
DROP TABLE innodb_tablespace_monitor;
The InnoDB Table Monitor is deprecated, and may be removed in a future version of MariaDB.
Enabling the Table Monitor outputs the contents of the InnoDB internal data dictionary to the error log every fifteen seconds.
To enable the Table Monitor, run the following statement:
CREATE TABLE innodb_table_monitor (a INT) ENGINE=INNODB;
To disable it, drop the table:
DROP TABLE innodb_table_monitor;
The SHOW ENGINE INNODB STATUS statement can be used to obtain the standard InnoDB Monitor output when required, rather than sending it to the error log. It will also display the InnoDB Lock Monitor information if the innodb_status_output_locks system variable is set to 1.
introduced a performance improvement related to group commit that affects the performance of flushing InnoDB transactions when the binary log is enabled.
In and above, when both innodb_flush_log_at_trx_commit=1 (the default) is set and the binary log is enabled, there is now one less sync to disk inside InnoDB during commit (2 syncs shared between a group of transactions instead of 3).
Durability of commits is not decreased — this is because even if the server crashes before the commit is written to disk by InnoDB, it will be recovered from the binary log at next server startup (and it is guaranteed that sufficient information is synced to disk so that such a recovery is always possible).
The old behavior, with 3 syncs to disk per (group) commit (and consequently lower performance), can be selected with the new innodb_flush_log_at_trx_commit=3 option. There is normally no benefit to doing this, however there are a couple of edge cases to be aware of.
If innodb_flush_log_at_trx_commit=1 is set and the binary log is enabled, but sync_binlog=0 is set, then commits are not guaranteed durable inside InnoDB after commit. This is because if sync_binlog=0 is set and if the server crashes, then transactions that were not flushed to the binary log prior to the crash will be missing from the binary log.
In this specific scenario, innodb_flush_log_at_trx_commit=3 can be set to ensure that transactions will be durable in InnoDB, even if they are not necessarily durable from the perspective of the binary log.
One should be aware that if sync_binlog=0 is set, then a crash is nevertheless likely to cause transactions to be missing from the binary log. This will cause the binary log and InnoDB to be inconsistent with each other. This is also likely to cause any replication slaves to become inconsistent, since transactions are replicated through the binary log. Thus it is recommended to set sync_binlog=1. With the group commit improvements introduced in , this setting has much less penalty in recent versions compared to older versions of MariaDB and MySQL.
mariadb-backup and Percona XtraBackup only see transactions that have been flushed to the redo log. With the group commit improvements, there may be a small delay (defined by the binlog_commit_wait_usec system variable) between when a commit happens and when the commit will be included in a backup.
Note that the backup will still be fully consistent with itself and the binary log. This problem is normally not an issue in practice. A backup usually takes a long time to complete (relative to the 1 second or so that binlog_commit_wait_usec is normally set to), and a backup usually includes a lot of transactions that were committed during the backup. With this in mind, it is not generally noticeable if the backup does not include transactions that were committed during the last 1 second or so of the backup process. It is just mentioned here for completeness.
When using mariadb-backup, you have the option of performing a full or incremental backup. Full backups create a complete copy in an empty directory while incremental backups update a previous backup with new data. This page documents incremental backups.
InnoDB pages contain log sequence numbers, or LSNs. Whenever you modify a row in any InnoDB table in the database, the storage engine increments this number. When performing an incremental backup, mariadb-backup checks the most recent LSN of the backup against the LSNs contained in the database. It then updates any of the backup files that have fallen behind.
In order to take an incremental backup, you first need to take a full backup. In order to back up the database, you need to run mariadb-backup with the --backup option to tell it to perform a backup and with the --target-dir option to tell it where to place the backup files. When taking a full backup, the target directory must be empty or it must not exist.
To take a backup, run the following command:
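The command itself was not captured in this copy; based on the options just described (and the target directory and credentials used elsewhere on this page), it would look something like:

```shell
# Full backup into an empty (or nonexistent) target directory
mariadb-backup --backup \
   --target-dir=/var/mariadb/backup/ \
   --user=mariadb-backup --password=mypassword
```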
This backs up all databases into the target directory /var/mariadb/backup. If you look in that directory at the xtrabackup_checkpoints file, you can see the LSN data provided by InnoDB.
For example:
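The file contents are not reproduced here; an illustrative sketch of the format (the LSN values are made up) is:

```
backup_type = full-backuped
from_lsn = 0
to_lsn = 1635102
last_lsn = 1635102
```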
Once you have created a full backup on your system, you can also back up the incremental changes as often as you would like.
In order to perform an incremental backup, you need to run mariadb-backup with the --backup option to tell it to perform a backup and with the --target-dir option to tell it where to place the incremental changes. The target directory must be empty. You also need to run it with the --incremental-basedir option to tell it the path to the full backup taken above. For example:
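A sketch of such a command, using the directory names from this page:

```shell
# Incremental backup: deltas relative to the full backup in
# --incremental-basedir are written to the empty --target-dir
mariadb-backup --backup \
   --target-dir=/var/mariadb/inc1/ \
   --incremental-basedir=/var/mariadb/backup/ \
   --user=mariadb-backup --password=mypassword
```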
This command creates a series of delta files that store the incremental changes in /var/mariadb/inc1. You can find a similar xtrabackup_checkpoints file in this directory, with the updated LSN values.
For example:
To perform additional incremental backups, you can then use the target directory of the previous incremental backup as the incremental base directory of the next incremental backup. For example:
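A sketch of a second incremental backup (the inc2 directory name is an assumption), based on the first incremental rather than the full backup:

```shell
# Second incremental: the previous incremental is now the base
mariadb-backup --backup \
   --target-dir=/var/mariadb/inc2/ \
   --incremental-basedir=/var/mariadb/inc1/ \
   --user=mariadb-backup --password=mypassword
```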
--stream output
When using --stream, e.g. for compression or encryption using external tools, the xtrabackup_checkpoints file, which contains the information on where to continue from on the next incremental backup, will also be part of the compressed/encrypted backup file, and so is not directly accessible by default.
A directory containing an extra copy of the file can be created using the --extra-lsndir=... option, though, and this directory can then be passed to the next incremental backup via --incremental-basedir=..., for example:
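A sketch of a streamed, compressed incremental backup that also writes the checkpoint information to a separate directory (the directory and file names here are assumptions):

```shell
# --extra-lsndir keeps a plain copy of xtrabackup_checkpoints
# outside the compressed stream, for use as the next base
mariadb-backup --backup --stream=xbstream \
   --extra-lsndir=/var/mariadb/inc1_lsn/ \
   --incremental-basedir=/var/mariadb/backup/ \
   --user=mariadb-backup --password=mypassword \
   | gzip > /var/mariadb/inc1.xb.gz
```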
Following the above steps, you have three backups in /var/mariadb: the first is a full backup, the others are increments on this first backup. In order to restore a backup to the database, you first need to apply the incremental backups to the base full backup. This is done using the --prepare command option. In , you would also have to use the --apply-log-only option.
In and later, perform the following process:
First, prepare the base backup:
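Along the lines of:

```shell
# Prepare the base full backup
mariadb-backup --prepare \
   --target-dir=/var/mariadb/backup/
```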
Running this command brings the base full backup, that is, /var/mariadb/backup, into sync with the changes contained in the InnoDB redo log collected while the backup was taken.
Then, apply the incremental changes to the base full backup:
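Using the --incremental-dir option to point at the incremental backup, for example:

```shell
# Merge the first incremental into the base full backup
mariadb-backup --prepare \
   --target-dir=/var/mariadb/backup/ \
   --incremental-dir=/var/mariadb/inc1/
```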
Running this command brings the base full backup, that is, /var/mariadb/backup, into sync with the changes contained in the first incremental backup.
For each remaining incremental backup, repeat the last step to bring the base full backup into sync with the changes contained in that incremental backup.
Once you've applied all incremental backups to the base, you can restore the backup using either the --copy-back or the --move-back options. The --copy-back option allows you to keep the original backup files. The --move-back option actually moves the backup files to the datadir, so the original backup files are lost.
First, stop the MariaDB Server process.
Then, ensure that the datadir is empty.
Then, run mariadb-backup with one of the options mentioned above:
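For example, to restore while keeping the original backup files:

```shell
# --copy-back leaves the backup directory intact
mariadb-backup --copy-back \
   --target-dir=/var/mariadb/backup/
```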
Then, you may need to fix the file permissions.
When mariadb-backup restores a database, it preserves the file and directory privileges of the backup. However, it writes the files to disk as the user and group restoring the database. As such, after restoring a backup, you may need to adjust the owner of the data directory to match the user and group of the MariaDB Server, typically mysql for both. For example, to recursively change ownership of the files to the mysql user and group, you could execute:
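Assuming the default mysql user and group and the default datadir location:

```shell
# Recursively hand the restored files to the server's user/group
chown -R mysql:mysql /var/lib/mysql/
```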
Finally, start the MariaDB Server process.
In the absence of a more tutorial-level document, here is a simple example of three basic JOIN types, which you can experiment with in order to see what the different joins accomplish:
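The example statements did not survive in this copy; using the t1 and t2 tables created earlier on this page, the joins might look like:

```sql
-- INNER JOIN: only rows with a match in both tables
SELECT * FROM t1 INNER JOIN t2 ON t1.a = t2.b;

-- CROSS JOIN: every row of t1 combined with every row of t2
SELECT * FROM t1 CROSS JOIN t2;

-- LEFT JOIN: all rows from t1, NULLs where t2 has no match
SELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.b;

-- RIGHT JOIN: all rows from t2, NULLs where t1 has no match
SELECT * FROM t1 RIGHT JOIN t2 ON t1.a = t2.b;
```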
The first two SELECTs are (unfortunately) commonly written with an older form:
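That older, implicit form places the join condition in the WHERE clause:

```sql
-- Equivalent to the INNER JOIN: condition in the WHERE clause
SELECT * FROM t1, t2 WHERE t1.a = t2.b;

-- A comma join with no condition is a cross join
SELECT * FROM t1, t2;
```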
What you can see from this is that an INNER JOIN produces a result set containing only rows that have a match, in both tables (t1 and t2), for the specified join condition(s).
A CROSS JOIN produces a result set in which every row in each table is joined to every row in the other table; this is also called a cartesian product. In MariaDB the CROSS keyword can be omitted, as it does nothing. Any JOIN without an ON clause is a CROSS JOIN.
The LEFT JOIN is an outer join, which produces a result set with all rows from the table on the "left" (t1); the values for the columns in the other table (t2) depend on whether or not a match was found. If no match is found, all columns from that table are set to NULL for that row.
The RIGHT JOIN is similar to the LEFT JOIN, though its result set contains all rows from the right table, and the left table's columns will be filled with NULLs when needed.
JOINs can be concatenated to read results from three or more tables.
Here is the output of the various SELECT statements listed above:
That should give you a bit more understanding of how JOINs work!
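As mentioned above, JOINs can be chained across three or more tables. A sketch extending the earlier example with a hypothetical third table t3 (the table and its data are illustrative, not part of the original example):

```sql
-- A hypothetical third table joined onto the earlier example
CREATE TABLE t3 ( c INT );
INSERT INTO t3 VALUES (2), (3);

SELECT * FROM t1
  INNER JOIN t2 ON t1.a = t2.b
  LEFT  JOIN t3 ON t1.a = t3.c;
-- Returns a single row: a=2, b=2, c=2
```

Each JOIN is evaluated left to right, so the LEFT JOIN here operates on the result of the preceding INNER JOIN.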
- Nice tutorial. Part 5 covers joins.
The initial version of this article was copied, with permission, from on 2012-10-05.
This page is licensed: CC BY-SA / Gnu FDL
Backup and restore implementations can help overcome specific technical challenges that would otherwise pose a barrier to meeting business requirements.
Each of these practices represents a trade-off. Understand risks before implementing any of these practices.
Technical challenge: restore time
Trade-off: increased ongoing overhead for backup processing
Backup data can be prepared for restore any time after it is produced and before it is used for restore. To expedite recovery, incremental backups can be pre-applied to the prior full backup to enable faster recovery. This may be done at the expense of recovery points, or at the expense of storage by maintaining copies of unmerged full and incremental backup directories.
Technical challenge: disk space limitations
Trade-off: modification of backup directory contents
The suggested method for moving restored data into place is --copy-back, as this method provides added safety. Where you instead want to optimize for disk space savings, system resources, and time, you may choose to use MariaDB Enterprise Backup's --move-back option. The speed benefit is only present when the backup files are on the same disk partition as the destination data directory.
The --move-back
option will result in the removal of all data files from the backup directory, so it is best to use this option only when you have an additional copy of your backup data in another location.
To restore from a backup by moving files, use the --move-back
option:
Technical challenge: CPU bottlenecks
Trade-off: Increased workload during backups
MariaDB Enterprise Backup is a multi-threaded application that by default runs on a single thread. In cases where you have a host with multiple cores available, you can specify the number of threads you want it to use for parallel data file transfers using the --parallel
option:
Technical challenge: Backup resource overhead, backup duration
Trade-off: Increased restore complexity, restore process duration
Under normal operation an incremental backup is taken against an existing full backup. This allows you to further shorten the amount of time MariaDB Enterprise Backup locks MariaDB Enterprise Server while copying tablespaces. You can then apply the changes in the increment to the full backup with a --prepare
operation at leisure, without disrupting database operations.
MariaDB Enterprise Backup also supports incrementing from an incremental backup. In this operation, the --incremental-basedir
option points not to the full backup directory but rather to the previous incremental backup.
In preparing a backup to restore the data directory, apply the chain of incremental backups to the full backup in order. That is, first inc1/, then inc2/
, and so on:
Continue until you have applied all available incremental backups to the base. Then restore as usual:
Start MariaDB Enterprise Server on the restored data directory.
Technical challenge: Backup resource overhead, backup duration
Trade-off: Limited to platforms with volume-level snapshots, may require crash recovery
While MariaDB Enterprise Backup produces file-level backups, users on capable storage platforms may prefer to perform volume-level snapshots to minimize resource impact. This capability exists with some SAN, NAS, and volume manager platforms.
Snapshots are taken at a single point in time, so no preparation step is needed to ensure data is internally consistent. However, snapshots occur while tablespaces are open, so a restored snapshot may need to undergo crash recovery.
Just as traditional full, incremental, and partial backups should be tested, so too should recovery from snapshots be tested on an ongoing basis.
MariaDB Enterprise Server includes functionality to reduce the impact of backup operations:
Connect with a client and issue a BACKUP STAGE START
statement and then a BACKUP STAGE BLOCK_COMMIT
statement.
Take the snapshot.
Issue a BACKUP STAGE END
statement.
Once the backup has been completed, remove all files which begin with the #sql prefix
. These files are generated when ALTER TABLE
occurs during a staged backup.
Retrieve, copy, or store the snapshot as is typical for your storage platform and as per business requirements to make the backup durable. This may require mounting the snapshot in some manner.
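The sequence above can be sketched as a single client session. The lvcreate command is illustrative only: it assumes the data directory lives on an LVM volume, and the volume and snapshot names are hypothetical.

```sql
BACKUP STAGE START;
BACKUP STAGE BLOCK_COMMIT;
-- Take the snapshot from this same session, using the client's
-- shell escape (hypothetical LVM volume and snapshot names):
system lvcreate --snapshot --name mariadb-snap --size 10G /dev/vg0/mariadb-data
BACKUP STAGE END;
```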
It is recommended to briefly prevent writes while snapshotting. Specific commands vary depending on storage platform, business requirements, and setup, but a general approach is to:
Connect with a client and issue a FLUSH TABLES WITH READ LOCK
statement, leaving the client connected.
Take the snapshot.
Issue an UNLOCK TABLES
statement, to remove the read lock.
Retrieve, copy, or store the snapshot as is typical for your storage platform and as per business requirements to make the backup durable. This may require mounting the snapshot in some manner.
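These steps must all happen in one client session, since the read lock is released as soon as the connection closes. A sketch, again with a hypothetical LVM snapshot command:

```sql
FLUSH TABLES WITH READ LOCK;
-- Take the snapshot from this same session (volume names are illustrative):
system lvcreate --snapshot --name mariadb-snap --size 10G /dev/vg0/mariadb-data
UNLOCK TABLES;
```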
This page is: Copyright © 2025 MariaDB. All rights reserved.
In the modern world, the importance of data is non-negotiable, and maintaining data integrity and consistency is a top priority. Data stored in databases is vulnerable to system crashes, hardware problems, security breaches, and other failures that cause data loss or corruption. To prevent database damage, it is important to back up data regularly and implement data restore policies. MariaDB, one of the most popular database management systems, provides several methods to configure routines for backing up and recovering data. This guide illustrates both processes performed with the help of dbForge Studio for MySQL, which also has everything you need to accomplish database-related tasks on MariaDB.
dbForge Studio for MySQL and MariaDB has a separate module dedicated to backup and restore jobs. Let us first look at how to set up the tool to create a MariaDB backup. Launch the Studio and go to Database > Backup and Restore > Backup Database. The Database Backup Wizard with several pages will appear. On the General page, specify the database in question and how to connect to it, then choose where to save the created backup file, and specify its name. There are additional optional settings: you can choose to delete old files automatically, zip the output backup file, and so on. When done, click Next.
On the Backup content page, select the objects to back up. Click Next.
The Options page. Here you can specify the details of the backup process. Plenty of available options allow you to configure this task precisely to meet your specific requirements. When done, click Next.
The Errors handling page. Here you configure how the Studio should handle errors that might occur during the backup process. You can also set the Studio to write information about the errors it encounters to a log file.
You can save the project settings to apply them in the future. For this, in the left bottom corner of the Wizard, select one of the saving options: Save Project or Save Command Line. The latter allows saving settings as a backup script which you can execute from the command line at any time later.
The configuration process is complete. Click Backup to start the backup.
Note: It is not obligatory to go through all the pages of the Wizard. The Backup button is available no matter which page you are on. Thus, you can launch the backup whenever you have configured everything you need.
After you have clicked Backup, dbForge Studio for MySQL starts to create a MariaDB backup.
When this is done, you will see the confirmation message. Click Finish.
Backup and restore policies suggest creating regular backups on a daily, weekly, monthly, quarterly, and yearly basis. Besides, to minimize the consequences of possible data loss, it is highly recommended to make a backup before making any changes to a database, such as upgrading, modifying data, or redesigning the structure. Simply put, you always need a fresh backup to restore the most up-to-date database version. To ensure regular backups on schedule, you can use a batch file created with the help of the Studio and Windows Task Scheduler, where you need to create and schedule the backup task.
This is an even faster task, done in half as many steps.
The process of data recovery from the backup file is simple. It only takes several clicks: Launch dbForge Studio for MySQL and go to Database > Backup and Restore > Restore Database. The Database Restore Wizard will appear. Specify the database name, its connection parameters, and the path to the backup file you want to restore. Then click Restore, and the process will start immediately.
When the process is complete, click Finish.
More information about this essential feature is available on the – it explores the routines performed on MySQL, but they fully apply to MariaDB backups. You can use the same IDE and the same workflow.
To test-drive this and other features of the Studio (the IDE includes all the tools necessary for the development, management, and administration of databases on MariaDB), . dbForge Studio for MySQL and MariaDB boasts truly advanced functionality that will help your teams deliver more value.
This page is licensed: CC BY-SA / Gnu FDL
are functions that are computed over a sequence of rows and return one result for the sequence of rows.
Creating a custom aggregate function is done using the statement with two main differences:
The addition of the AGGREGATE keyword, so CREATE AGGREGATE FUNCTION
The FETCH GROUP NEXT ROW
instruction inside the loop
Oracle PL/SQL compatibility using SQL/PL is provided
Stored aggregate functions were a project by Varun Gupta.
First a simplified example:
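The original example is not reproduced in this extract; the following sketch is consistent with the pattern described above (the marks table, function name, and data are illustrative). The NOT FOUND handler is what ends the loop and returns the result once the group is exhausted:

```sql
CREATE TABLE marks (stud_id INT, grade_count INT);
INSERT INTO marks VALUES (1, 6), (2, 4), (3, 7), (4, 5), (5, 8);

DELIMITER //
CREATE AGGREGATE FUNCTION aggregate_count(x INT) RETURNS INT
BEGIN
  DECLARE count_students INT DEFAULT 0;
  -- When the group has no more rows, FETCH GROUP NEXT ROW raises
  -- NOT FOUND and this handler returns the accumulated result:
  DECLARE CONTINUE HANDLER FOR NOT FOUND RETURN count_students;
  LOOP
    FETCH GROUP NEXT ROW;  -- suspends until the next row is available
    IF x THEN
      SET count_students = count_students + 1;
    END IF;
  END LOOP;
END //
DELIMITER ;

SELECT aggregate_count(stud_id) FROM marks;
```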
A non-trivial example that cannot easily be rewritten using existing functions:
This uses the same marks table as created above.
This page is licensed: CC BY-SA / Gnu FDL
The RANGE partitioning type is used to assign each partition a range of values generated by the partitioning expression. Ranges must be ordered, contiguous and non-overlapping. The minimum value is always included in the first range. The highest value may or may not be included in the last range.
A variant of this partitioning method, , allows us to use multiple columns and more datatypes.
The last part of a statement can be definition of the new table's partitions. In the case of RANGE partitioning, the syntax is the following:
PARTITION BY RANGE indicates that the partitioning type is RANGE.
The partitioning_expression
is an SQL expression that returns a value from each row. In the simplest cases, it is a column name. This value is used to determine which partition should contain a row.
partition_name
is the name of a partition.
value
indicates the upper bound for that partition. The values must be ascending. For the first partition, the lower limit is NULL. When trying to insert a row, if its value is higher than the upper limit of the last partition, the row will be rejected (with an error, if the keyword is not used).
As a catchall, MAXVALUE can be specified as a value for the last partition. Note however that in order to append a new partition, it is not possible to use ; rather must be used.
A typical use case is when we want to partition a table whose rows refer to a moment or period in time; for example commercial transactions, blog posts, or events of some kind. We can partition the table by year, to keep all recent data in one partition and distribute historical data in big partitions that are stored on slower disks. Or, if our queries always read rows which refer to the same month or week, we can partition the table by month or year week (in this case, historical data and recent data will be stored together).
values also represent a chronological order. So, these values can be used to store old data in separate partitions. However, partitioning by id is not the best choice if we usually query a table by date.
In the following example, we will partition a log table by year.
As an alternative, we can partition the table by both year and month:
As you can see, we used the function to accomplish the purpose. Also, the first two partitions cover longer periods of time (probably because the logged activities were less intensive).
In both cases, when our tables become huge and we don't need to store all historical data any more, we can drop the oldest partitions in this way:
We will still be able to drop a partition that does not contain the oldest data, but all rows stored in it will disappear.
Example of an error when inserting outside a defined partition range:
Unless the IGNORE keyword is used:
An alternative definition with MAXVALUE as a catchall:
This page is licensed: CC BY-SA / Gnu FDL
This is a description of the different stages in mariadb-backup, what they do and why they are needed.
Note that a few items are marked with TODO; these are things we are working on and will be in the next version of mariadb-backup.
Connect to the mysqld instance and find out important variables (datadir, InnoDB page size, encryption keys, encryption plugin, etc.)
Scan the database directory, datadir, looking for InnoDB tablespaces, and load the tablespaces (basically, an “open” in the InnoDB sense)
If --lock-ddl-per-table is used:
Take MDL locks for the InnoDB tablespaces that we want to copy. This ensures that there is no ALTER, RENAME, TRUNCATE, or DROP TABLE on any of the tables that we want to copy.
This is implemented with:
If --lock-ddl-per-table is not used, then mariadb-backup would have to know all tables that were created or altered during the backup. See .
Start a dedicated thread in mariadb-backup to copy InnoDB redo log (ib_logfile*
).
This is needed to record all changes done while the backup is running. (The redo log logically is a single circular file, split into innodb_log_files_in_group files.)
The log is also used to detect whether any TRUNCATE or online ALTER TABLE operations occurred.
The assumption is that the copy thread will be able to keep up with the server. It should always be able to keep up if the redo log is big enough.
Copy all selected tablespaces, file by file, in dedicated threads in mariadb-backup without involving the mysqld server.
This is a special “careful” copy; it checks page-level consistency by verifying the checksum.
The files are not point-in-time consistent as data may change during copy.
The idea is that InnoDB recovery would make it point-in-time consistent.
Copy Aria log files (TODO)
Execute FLUSH TABLES WITH READ LOCK. This is the default, but it may be omitted with the --no-lock parameter. The reason why FLUSH is needed is to ensure that all tables are in a consistent state at the exact same point in time, independent of storage engine.
If --lock-ddl-per-table
is used and there is a user query waiting for MDL, the user query will be killed to resolve a deadlock. Note that these are only queries of type ALTER, DROP, TRUNCATE or RENAME TABLE. ()
Copy .frm
, MyISAM
, Aria
and other storage engine files
If MyRocks
is used, create a rocksdb checkpoint via the "set rocksdb_create_checkpoint=$rocksdb_data_dir/mariadb-backup_rocksdb_checkpoint" command. The result is a directory with hardlinks to the MyRocks files. Copy the checkpoint directory to the backup (or create hardlinks if the backup directory is on the same partition as the data directory), then remove the checkpoint directory.
Copy tables that were created while the backup was running, and rename files that were changed during the backup (since )
Copy the rest of InnoDB redo log, stop redo-log-copy thread
Copy changes to Aria log files (They are append only, so this is easy to do) (TODO)
Write some metadata info (binlog position)
If FLUSH TABLES WITH READ LOCK was done:
execute: UNLOCK TABLES
If --lock-ddl-per-table
was done:
execute COMMIT
If log tables exists:
Take MDL lock for log tables
Copy part of log tables that wasn't copied before
Unlock log tables
If FLUSH TABLES WITH READ LOCK is not used, then only InnoDB tables will be consistent (not the privilege tables in the mysql database, nor the binary log). The backup point depends on the content of the redo log within the backup itself.
This page is licensed: CC BY-SA / Gnu FDL
From , InnoDB uses asynchronous I/O to read from and write to disk. This forms part of the InnoDB Background Thread Pool.
Each asynchronous IO operation goes through multiple stages:
SUBMITTED – The IO operation is initiated.
For asynchronous writes, this typically occurs in the buffer pool flushing code.
For asynchronous reads, this may happen during buffer pool loading at startup or in prefetching logic.
COMPLETED_IN_OS – The operating system notifies InnoDB that the I/O operation is complete.
If using libaio or io_uring, a dedicated thread handles this notification.
The completed IO operation is then submitted to InnoDB’s internal thread pool (tpool).
EXECUTING_COMPLETION_TASK – A tpool thread processes the completion task for the IO operation.
COMPLETED – The IO operation is fully handled.
The total number of pending asynchronous IO operations is limited by:
where number_of_IO_threads refers to either or .
Each IO operation is associated with an IO slot, which contains necessary metadata such as the file handle, operation type, offset, length, and any OS error codes. Initially, all total_count slots are free, but as pending IO requests accumulate, slots get occupied. If all slots are in use, additional IO requests must wait for a free slot.
The number of completion tasks (EXECUTING_COMPLETION_TASK stage) that can run in parallel is also limited by or . If too many IO operations complete simultaneously, they cannot all be processed in parallel and must be queued, respecting the thread limit.
From , a number of status variables were added to give insight into the above operations:
– Number of read IO operations currently in progress (from SUBMITTED to COMPLETED).
– Number of read IO operations currently in the EXECUTING_COMPLETION_TASK state.
– Total number of read completion tasks that have finished execution.
– Current size of the queue (see ).
– Total wait time for a free IO slot (see Waiting for IO Slots).
– Total number of read operations that were queued (see ). Includes those still waiting and making up innodb_async_reads_queue_size
.
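These counters are ordinary status variables, so they can be inspected with a pattern match; for example, for the read side (the pattern matches the variable names listed above):

```sql
SHOW GLOBAL STATUS LIKE 'Innodb_async_reads%';
```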
Similar variables exist for write operations:
This page is licensed: CC BY-SA / Gnu FDL
The DROP FUNCTION statement is used to drop a or a user-defined function (UDF). That is, the specified routine is removed from the server, along with all privileges specific to the function. You must have the ALTER ROUTINE
privilege for the routine in order to drop it. If the server system variable is set, both the ALTER ROUTINE
and EXECUTE
privileges are granted automatically to the routine creator - see .
The IF EXISTS clause is a MySQL/MariaDB extension. It prevents an error from occurring if the function does not exist. A NOTE is produced that can be viewed with .
For dropping a (UDF), see .
This page is licensed: GPLv2, originally from
This primer is designed to teach you the basics of getting information into and out of an existing MariaDB database using the command-line client program. It's not a complete reference and will not touch on any advanced topics. It is just a quick jump-start into using MariaDB.
Log into your MariaDB server from the command-line like so:
Replace user_name with your database username. Replace ip_address with the host name or address of your server. If you're accessing MariaDB from the same server you're logged into, then don't include -h
and the ip_address. Replace db_name with the name of the database you want to access (such as test, which sometimes comes already created for testing purposes - note that Windows does not create this database, and some setups may also have removed the test
database by running , in which case you can leave the db_name out).
When prompted to enter your password, enter it. If your login is successful you should see something that looks similar to this:
This is where you will enter all of your SQL statements. More about those later. For now, let's look at the components of the prompt: the "MariaDB" part means that you are connected to a MariaDB database server. The word between the brackets is the name of your default database, the test database in this example.
To make changes to a database or to retrieve data, you will need to enter an SQL statement. SQL stands for Structured Query Language. An SQL statement that requests data is called a query. Databases store information in tables. They're similar to spreadsheets, but much more efficient at managing data.
Note that the test database may not contain any data yet. If you want to follow along with the primer, copy and paste the following into the client. This will create the tables we will use and add some data to them. Don't worry about understanding them yet; we'll get to that later.
Notice the semi-colons used above. The client lets you enter very complex SQL statements over multiple lines. It won't send an SQL statement until you type a semi-colon and hit [Enter].
Let's look at what you've done so far. Enter the following:
Notice that this displays a list of the tables in the database. If you didn't already have tables in your test
database, your results should look the same as above. Let's now enter the following to get information about one of these tables:
The main bit of information of interest to us is the Field column. The other columns provide useful information about the structure and type of data in the database, but the Field column gives us the column names, which are needed to retrieve data from the table.
Let's retrieve data from the books
table. We'll do so by executing a statement like so:
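Based on the description that follows, the statement is simply:

```sql
SELECT * FROM books;
```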
This SQL statement or query asks the database to show us all of the data in the books
table. The wildcard ('*
') character indicates to select all columns.
Suppose now that we want to add another book to this table. We'll add the book, Lair of Bones. To insert data into a table, you would use the statement. To insert information on a book, we would enter something like this:
Notice that we put a list of columns in parentheses after the name of the table, then we enter the keyword VALUES
followed by a list of values in parentheses--in the same order as the columns were listed. We could put the columns in a different order, as long as the values are in the same order as we list the columns. Notice the message that was returned indicates that the execution of the SQL statement went fine and one row was entered.
Execute the following SQL statement again and see what results are returned:
You should see the data you just entered on the last row of the results. In looking at the data for the other books, suppose we notice that the title of the seventh book is spelled wrong. It should be spelled The Hobbit, not The Hobbbit. We will need to update the data for that row.
To change data in a table, you will use the statement. Let's change the spelling of the book mentioned above. To do this, enter the following:
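Consistent with the clauses described next (a SET on the title and a WHERE on BookID = 7), the statement would look like this; the exact column names are assumed from context:

```sql
-- Column names (Title, BookID) are assumed from the surrounding text
UPDATE books
SET Title = 'The Hobbit'
WHERE BookID = 7;
```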
Notice the syntax of this SQL statement. The SET
clause is where you list the columns and the values to set them. The WHERE
clause says that you want to update only rows in which the BookID
column has a value of 7
, of which there is only one. You can see from the message it returned that one row matched the WHERE
clause and one row was changed. There are no warnings because everything went fine. Execute the from earlier to see that the data changed.
As you can see, using MariaDB isn't very difficult. You just have to understand the syntax of SQL since it doesn't allow for typing mistakes or things in the wrong order or other deviations.
HASH partitioning is a form of in which the server takes care of the partition in which to place the data, ensuring an even distribution among the partitions.
It requires a column value, or an expression based on a column value, which is hashed, as well as the number of partitions into which to divide the table.
partitioning_expression needs to return a non-constant, deterministic integer. It is evaluated for each insert and update, so overly complex expressions can lead to performance issues. A hashing function operating on a single column, and where the value changes consistently with the column value, allows for easy pruning on ranges of partitions, and is usually a better choice. For this reason, using multiple columns in a hashing expression is not usually recommended.
number_of_partitions is a positive integer specifying the number of partitions into which to divide the table. If the PARTITIONS
clause is omitted, the default number of partitions is one.
To determine which partition to use, the following calculation is performed: MOD(partitioning_expression, number_of_partitions)
For example, if the expression is TO_DAYS(datetime_column) and the number of partitions is 5, inserting a datetime value of '2023-11-15' would determine the partition as follows:
TO_DAYS('2023-11-15') gives a value of 739204
MOD(739204,5) returns 4, so partition p4 (counting from p0) is used.
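Both steps can be verified directly from the client:

```sql
SELECT TO_DAYS('2023-11-15') AS days, MOD(TO_DAYS('2023-11-15'), 5) AS part;
-- days = 739204, part = 4
```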
HASH partitioning makes use of the modulus of the hashing function's value. The LINEAR HASH partitioning type is similar, but uses a powers-of-two algorithm. With HASH, data is more likely to be evenly distributed over the partitions than with the LINEAR HASH partitioning type; however, adding, dropping, merging, and splitting partitions is much slower.
Using the for more information:
for suggestions on using partitions
This page is licensed: CC BY-SA / Gnu FDL
CREATE TABLE t1 ( a INT );
CREATE TABLE t2 ( b INT );
INSERT INTO t1 VALUES (1), (2), (3);
INSERT INTO t2 VALUES (2), (4);
SELECT * FROM t1 INNER JOIN t2 ON t1.a = t2.b;
SELECT * FROM t1 CROSS JOIN t2;
SELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.b;
SELECT * FROM t2 LEFT JOIN t1 ON t1.a = t2.b;
SELECT * FROM t1, t2 WHERE t1.a = t2.b;
SELECT * FROM t1, t2;
SELECT * FROM t1 INNER JOIN t2 ON t1.a = t2.b;
+------+------+
| a    | b    |
+------+------+
|    2 |    2 |
+------+------+
1 row in set (0.00 sec)
SELECT * FROM t1 CROSS JOIN t2;
+------+------+
| a    | b    |
+------+------+
|    1 |    2 |
|    2 |    2 |
|    3 |    2 |
|    1 |    4 |
|    2 |    4 |
|    3 |    4 |
+------+------+
6 rows in set (0.00 sec)
SELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.b;
+------+------+
| a    | b    |
+------+------+
|    1 | NULL |
|    2 |    2 |
|    3 | NULL |
+------+------+
3 rows in set (0.00 sec)
SELECT * FROM t2 LEFT JOIN t1 ON t1.a = t2.b;
+------+------+
| b    | a    |
+------+------+
|    2 |    2 |
|    4 | NULL |
+------+------+
2 rows in set (0.00 sec)
mariadb-backup --move-back --target-dir=/data/backups/full
mariadb-backup --backup \
--target-dir=/data/backups/full \
--user=mariadb-backup \
--password=mbu_passwd \
--parallel=12
mariadb-backup --backup \
--incremental-basedir=/data/backups/inc1 \
--target-dir=/data/backups/inc2 \
--user=mariadb-backup \
--password=mbu_passwd
mariadb-backup --prepare \
--target-dir=/data/backups/full \
--incremental-dir=/data/backups/inc1
mariadb-backup --prepare \
--target-dir=/data/backups/full \
--incremental-dir=/data/backups/inc2
mariadb-backup --copy-back --target-dir=/data/backups/full
chown -R mysql:mysql /var/lib/mysql
PARTITION BY RANGE (partitioning_expression)
(
PARTITION partition_name VALUES LESS THAN (value),
[ PARTITION partition_name VALUES LESS THAN (value), ... ]
[ PARTITION partition_name VALUES LESS THAN MAXVALUE ]
)
CREATE TABLE log
(
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
dt DATETIME NOT NULL,
user INT UNSIGNED,
PRIMARY KEY (id, dt)
)
ENGINE = InnoDB
PARTITION BY RANGE (YEAR(dt))
(
PARTITION p0 VALUES LESS THAN (2013),
PARTITION p1 VALUES LESS THAN (2014),
PARTITION p2 VALUES LESS THAN (2015),
PARTITION p3 VALUES LESS THAN (2016)
);
CREATE TABLE log2
(
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
ts TIMESTAMP NOT NULL,
user INT UNSIGNED,
PRIMARY KEY (id, ts)
)
ENGINE = InnoDB
PARTITION BY RANGE (UNIX_TIMESTAMP(ts))
(
PARTITION p0 VALUES LESS THAN (UNIX_TIMESTAMP('2014-08-01 00:00:00')),
PARTITION p1 VALUES LESS THAN (UNIX_TIMESTAMP('2014-11-01 00:00:00')),
PARTITION p2 VALUES LESS THAN (UNIX_TIMESTAMP('2015-01-01 00:00:00')),
PARTITION p3 VALUES LESS THAN (UNIX_TIMESTAMP('2015-02-01 00:00:00'))
);
ALTER TABLE log DROP PARTITION p0;
INSERT INTO log(id,dt) VALUES
(1, '2016-01-01 01:01:01'),
(2, '2015-01-01 01:01:01');
ERROR 1526 (HY000): Table has no partition for value 2016
INSERT IGNORE INTO log(id,dt) VALUES
(1, '2016-01-01 01:01:01'),
(2, '2015-01-01 01:01:01');
SELECT * FROM log;
+----+---------------------+------+
| id | dt                  | user |
+----+---------------------+------+
| 2 | 2015-01-01 01:01:01 | NULL |
+----+---------------------+------+
CREATE TABLE log
(
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
dt DATETIME NOT NULL,
user INT UNSIGNED,
PRIMARY KEY (id, dt)
)
ENGINE = InnoDB
PARTITION BY RANGE (YEAR(dt))
(
PARTITION p0 VALUES LESS THAN (2013),
PARTITION p1 VALUES LESS THAN (2014),
PARTITION p2 VALUES LESS THAN (2015),
PARTITION p3 VALUES LESS THAN (2016),
PARTITION p4 VALUES LESS THAN MAXVALUE
);
DROP FUNCTION [IF EXISTS] f_name
DROP FUNCTION hello;
Query OK, 0 rows affected (0.042 sec)
DROP FUNCTION hello;
ERROR 1305 (42000): FUNCTION test.hello does not exist
DROP FUNCTION IF EXISTS hello;
Query OK, 0 rows affected, 1 warning (0.000 sec)
SHOW WARNINGS;
+-------+------+------------------------------------+
| Level | Code | Message |
+-------+------+------------------------------------+
| Note | 1305 | FUNCTION test.hello does not exist |
+-------+------+------------------------------------+
PARTITION BY HASH (partitioning_expression)
[PARTITIONS(number_of_partitions)]
CREATE OR REPLACE TABLE t1 (c1 INT, c2 DATETIME)
PARTITION BY HASH(TO_DAYS(c2))
PARTITIONS 5;
INSERT INTO t1 VALUES (1,'2023-11-15');
SELECT PARTITION_NAME,TABLE_ROWS FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_SCHEMA='test' AND TABLE_NAME='t1';
+----------------+------------+
| PARTITION_NAME | TABLE_ROWS |
+----------------+------------+
| p0 | 0 |
| p1 | 0 |
| p2 | 0 |
| p3 | 0 |
| p4 | 1 |
+----------------+------------+
The BACKUP STAGE commands are a set of commands that make it possible to implement an efficient external backup tool. How mariadb-backup uses these commands depends on whether you are using the version that is bundled with MariaDB Community Server or the version that is bundled with MariaDB Enterprise Server.
BACKUP STAGE Commands in MariaDB Community Server

The BACKUP STAGE commands are supported. However, the version of mariadb-backup
that is bundled with MariaDB Community Server does not yet use the BACKUP STAGE
commands in the most efficient way. mariadb-backup simply executes the following BACKUP STAGE
commands to lock the database:
BACKUP STAGE START;
BACKUP STAGE BLOCK_COMMIT;
When the backup is complete, it executes the following BACKUP STAGE
command to unlock the database:
BACKUP STAGE END;
If you would like to use a version of mariadb-backup
that uses the BACKUP STAGE commands in the most efficient way, then your best option is to use MariaDB Backup that is bundled with MariaDB Enterprise Server.
BACKUP STAGE in MariaDB Community Server

Copy some transactional tables.
InnoDB (i.e. ibdataN
and file extensions .ibd
and .isl
)
Copy the tail of some transaction logs.
The tail of the InnoDB redo log (i.e. ib_logfileN
files) will be copied for InnoDB tables.
BACKUP STAGE START in MariaDB Community Server

mariadb-backup from MariaDB Community Server does not currently perform any tasks in the START
stage.
BACKUP STAGE FLUSH in MariaDB Community Server

mariadb-backup from MariaDB Community Server does not currently perform any tasks in the FLUSH
stage.
BACKUP STAGE BLOCK_DDL in MariaDB Community Server

mariadb-backup from MariaDB Community Server does not currently perform any tasks in the BLOCK_DDL
stage.
BACKUP STAGE BLOCK_COMMIT in MariaDB Community Server
mariadb-backup from MariaDB Community Server performs the following tasks in the BLOCK_COMMIT stage:
Copy other files (i.e. file extensions .frm, .isl, .TRG, .TRN, .opt, .par).
Copy some transactional tables.
Aria (i.e. aria_log_control and file extensions .MAD and .MAI)
Copy the non-transactional tables.
MyISAM (i.e. file extensions .MYD and .MYI)
MERGE (i.e. file extension .MRG)
ARCHIVE (i.e. file extensions .ARM and .ARZ)
CSV (i.e. file extensions .CSM and .CSV)
Create a MyRocks checkpoint using the rocksdb_create_checkpoint system variable.
Copy the tail of some transaction logs.
The tail of the InnoDB redo log (i.e. ib_logfileN files) will be copied for InnoDB tables.
Save the binary log position to xtrabackup_binlog_info.
Save the Galera Cluster state information to xtrabackup_galera_info.
BACKUP STAGE END in MariaDB Community Server
mariadb-backup from MariaDB Community Server performs the following tasks in the END stage:
Copy the MyRocks checkpoint into the backup.
BACKUP STAGE Commands in MariaDB Enterprise Server
The following sections describe how the MariaDB Backup version of mariadb-backup that is bundled with MariaDB Enterprise Server uses each BACKUP STAGE command in an efficient way.
BACKUP STAGE START in MariaDB Enterprise Server
mariadb-backup from MariaDB Enterprise Server performs the following tasks in the START stage:
Copy all transactional tables.
InnoDB (i.e. ibdataN and file extensions .ibd and .isl)
Aria (i.e. aria_log_control and file extensions .MAD and .MAI)
Copy the tail of all transaction logs.
The tail of the InnoDB redo log (i.e. ib_logfileN files) will be copied for InnoDB tables.
The tail of the Aria redo log (i.e. aria_log.N files) will be copied for Aria tables.
BACKUP STAGE FLUSH in MariaDB Enterprise Server
mariadb-backup from MariaDB Enterprise Server performs the following tasks in the FLUSH stage:
Copy all non-transactional tables that are not in use. The list of used tables is found with SHOW OPEN TABLES.
MyISAM (i.e. file extensions .MYD and .MYI)
MERGE (i.e. file extension .MRG)
ARCHIVE (i.e. file extensions .ARM and .ARZ)
CSV (i.e. file extensions .CSM and .CSV)
Copy the tail of all transaction logs.
The tail of the InnoDB redo log (i.e. ib_logfileN files) will be copied for InnoDB tables.
The tail of the Aria redo log (i.e. aria_log.N files) will be copied for Aria tables.
BACKUP STAGE BLOCK_DDL in MariaDB Enterprise Server
mariadb-backup from MariaDB Enterprise Server performs the following tasks in the BLOCK_DDL stage:
Copy other files (i.e. file extensions .frm, .isl, .TRG, .TRN, .opt, .par).
Copy the non-transactional tables that were in use during BACKUP STAGE FLUSH.
MyISAM (i.e. file extensions .MYD and .MYI)
MERGE (i.e. file extension .MRG)
ARCHIVE (i.e. file extensions .ARM and .ARZ)
CSV (i.e. file extensions .CSM and .CSV)
Check ddl.log for DDL executed before the BLOCK_DDL stage.
The file names of newly created tables can be read from ddl.log.
The file names of dropped tables can also be read from ddl.log.
The file names of renamed tables can also be read from ddl.log, so the files can be renamed instead of re-copying them.
Copy changes to system log tables.
mysql.general_log
mysql.slow_log
This is easy, as these tables are append-only.
Copy the tail of all transaction logs.
The tail of the InnoDB redo log (i.e. ib_logfileN files) will be copied for InnoDB tables.
The tail of the Aria redo log (i.e. aria_log.N files) will be copied for Aria tables.
BACKUP STAGE BLOCK_COMMIT in MariaDB Enterprise Server
mariadb-backup from MariaDB Enterprise Server performs the following tasks in the BLOCK_COMMIT stage:
Create a MyRocks checkpoint using the rocksdb_create_checkpoint system variable.
Copy changes to system log tables.
mysql.general_log
mysql.slow_log
This is easy, as these tables are append-only.
Copy changes to statistics tables.
mysql.table_stats
mysql.column_stats
mysql.index_stats
Copy the tail of all transaction logs.
The tail of the InnoDB redo log (i.e. ib_logfileN files) will be copied for InnoDB tables.
The tail of the Aria redo log (i.e. aria_log.N files) will be copied for Aria tables.
Save the binary log position to xtrabackup_binlog_info.
Save the Galera Cluster state information to xtrabackup_galera_info.
BACKUP STAGE END in MariaDB Enterprise Server
mariadb-backup from MariaDB Enterprise Server performs the following tasks in the END stage:
Copy the MyRocks checkpoint into the backup.
This page is licensed: CC BY-SA / Gnu FDL
InnoDB page cleaner threads flush dirty pages from the InnoDB buffer pool. These dirty pages are flushed using a least-recently used (LRU) algorithm.
The innodb_max_dirty_pages_pct variable specifies the maximum percentage of unwritten (dirty) pages in the buffer pool. If this percentage is exceeded, flushing will take place.
The innodb_max_dirty_pages_pct_lwm variable determines the low-water mark percentage of dirty pages that will enable preflushing to lower the dirty page ratio. The value 0 (the default) means that there will be no separate background flushing so long as:
the share of dirty pages does not exceed innodb_max_dirty_pages_pct
the last checkpoint age (LSN difference since the latest checkpoint) does not exceed innodb_log_file_size (minus some safety margin)
the buffer pool is not running out of space, which could trigger eviction flushing
To make flushing more eager, set innodb_max_dirty_pages_pct_lwm to a low nonzero value, for example: SET GLOBAL innodb_max_dirty_pages_pct_lwm=0.001;
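As a rough check before tuning, the current dirty-page ratio can be estimated from two standard status counters (divide pages_dirty by pages_total):

```sql
-- Estimate the current share of dirty pages in the buffer pool
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total';
```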
The innodb_page_cleaners system variable makes it possible to use multiple InnoDB page cleaner threads. It is deprecated and ignored now as the original reasons for splitting the buffer pool have mostly gone away.
In versions where it is supported, the number of InnoDB page cleaner threads can be configured by setting the innodb_page_cleaners system variable in a server option group in an option file prior to starting up the server. For example:
[mariadb]
...
innodb_page_cleaners=8
The system variable can be changed dynamically with SET GLOBAL. For example:
SET GLOBAL innodb_page_cleaners=8;
This system variable's default value is either 4 or the configured value of the innodb_buffer_pool_instances system variable, whichever is lower.
Since the original reasons for splitting the buffer pool have mostly gone away, only a single InnoDB page cleaner thread is supported.
InnoDB's multi-thread flush feature can be enabled by setting the innodb_use_mtflush system variable. The number of threads can be configured by setting the innodb_mtflush_threads system variable. This system variable can be set in a server option group in an option file prior to starting up the server. For example:
[mariadb]
...
innodb_use_mtflush = ON
innodb_mtflush_threads = 8
The innodb_mtflush_threads system variable's default value is 8. The maximum value is 64. In multi-core systems, it is recommended to set its value close to the configured value of the innodb_buffer_pool_instances system variable. However, it is also recommended to use your own benchmarks to find a suitable value for your particular application.
InnoDB's multi-thread flush feature is deprecated. Use multiple InnoDB page cleaner threads instead.
Increasing the amount of I/O capacity available to InnoDB can also help increase the performance of page flushing.
The amount of I/O capacity available to InnoDB can be configured by setting the innodb_io_capacity system variable. This system variable can be changed dynamically with SET GLOBAL. For example:
SET GLOBAL innodb_io_capacity=20000;
This system variable can also be set in a server option group in an option file prior to starting up the server. For example:
[mariadb]
...
innodb_io_capacity=20000
The maximum amount of I/O capacity available to InnoDB in an emergency defaults to either 2000 or twice innodb_io_capacity, whichever is higher, or it can be configured directly by setting the innodb_io_capacity_max system variable. This system variable can be changed dynamically with SET GLOBAL. For example:
SET GLOBAL innodb_io_capacity_max=20000;
This system variable can also be set in a server option group in an option file prior to starting up the server. For example:
[mariadb]
...
innodb_io_capacity_max=20000
This page is licensed: CC BY-SA / Gnu FDL
$ mariadb-backup --backup \
--target-dir=/var/mariadb/backup/ \
--user=mariadb-backup --password=mypassword
backup_type = full-backuped
from_lsn = 0
to_lsn = 1635102
last_lsn = 1635102
recover_binlog_info = 0
$ mariadb-backup --backup \
--target-dir=/var/mariadb/inc1/ \
--incremental-basedir=/var/mariadb/backup/ \
--user=mariadb-backup --password=mypassword
backup_type = incremental
from_lsn = 1635102
to_lsn = 1635114
last_lsn = 1635114
recover_binlog_info = 0
$ mariadb-backup --backup \
--target-dir=/var/mariadb/inc2/ \
--incremental-basedir=/var/mariadb/inc1/ \
--user=mariadb-backup --password=mypassword
# initial full backup
$ mariadb-backup --backup --stream=mbstream \
--user=mariadb-backup --password=mypassword \
--extra-lsndir=backup_base | gzip > backup_base.gz
# incremental backup
$ mariadb-backup --backup --stream=mbstream \
--incremental-basedir=backup_base \
--user=mariadb-backup --password=mypassword \
--extra-lsndir=backup_inc1 | gzip > backup-inc1.gz
$ mariadb-backup --prepare \
--target-dir=/var/mariadb/backup
$ mariadb-backup --prepare \
--target-dir=/var/mariadb/backup \
--incremental-dir=/var/mariadb/inc1
$ mariadb-backup --copy-back \
--target-dir=/var/mariadb/backup/
$ chown -R mysql:mysql /var/lib/mysql/
CREATE AGGREGATE FUNCTION function_name (parameters) RETURNS return_type
BEGIN
All types of declarations
DECLARE CONTINUE HANDLER FOR NOT FOUND RETURN return_val;
LOOP
FETCH GROUP NEXT ROW; -- fetches next row from the table
other instructions
END LOOP;
END
SET sql_mode=Oracle;
DELIMITER //
CREATE AGGREGATE FUNCTION function_name (parameters) RETURN return_type
declarations
BEGIN
LOOP
FETCH GROUP NEXT ROW; -- fetches next row from table
-- other instructions
END LOOP;
EXCEPTION
WHEN NO_DATA_FOUND THEN
RETURN return_val;
END //
DELIMITER ;
CREATE TABLE marks(stud_id INT, grade_count INT);
INSERT INTO marks VALUES (1,6), (2,4), (3,7), (4,5), (5,8);
SELECT * FROM marks;
+---------+-------------+
| stud_id | grade_count |
+---------+-------------+
|       1 |           6 |
|       2 |           4 |
|       3 |           7 |
|       4 |           5 |
|       5 |           8 |
+---------+-------------+
DELIMITER //
CREATE AGGREGATE FUNCTION IF NOT EXISTS aggregate_count(x INT) RETURNS INT
BEGIN
DECLARE count_students INT DEFAULT 0;
DECLARE CONTINUE HANDLER FOR NOT FOUND
RETURN count_students;
LOOP
FETCH GROUP NEXT ROW;
IF x THEN
SET count_students = count_students+1;
END IF;
END LOOP;
END //
DELIMITER ;
DELIMITER //
CREATE AGGREGATE FUNCTION medi_int(x INT) RETURNS DOUBLE
BEGIN
DECLARE CONTINUE HANDLER FOR NOT FOUND
BEGIN
DECLARE res DOUBLE;
DECLARE cnt INT DEFAULT (SELECT COUNT(*) FROM tt);
DECLARE lim INT DEFAULT (cnt-1) DIV 2;
IF cnt % 2 = 0 THEN
SET res = (SELECT AVG(a) FROM (SELECT a FROM tt ORDER BY a LIMIT lim,2) ttt);
ELSE
SET res = (SELECT a FROM tt ORDER BY a LIMIT lim,1);
END IF;
DROP TEMPORARY TABLE tt;
RETURN res;
END;
CREATE TEMPORARY TABLE tt (a INT);
LOOP
FETCH GROUP NEXT ROW;
INSERT INTO tt VALUES (x);
END LOOP;
END //
DELIMITER ;
SET sql_mode=Oracle;
DELIMITER //
CREATE AGGREGATE FUNCTION aggregate_count(x INT) RETURN INT AS count_students INT DEFAULT 0;
BEGIN
LOOP
FETCH GROUP NEXT ROW;
IF x THEN
SET count_students := count_students+1;
END IF;
END LOOP;
EXCEPTION
WHEN NO_DATA_FOUND THEN
RETURN count_students;
END aggregate_count //
DELIMITER ;
SELECT aggregate_count(stud_id) FROM marks;
mariadb -u user_name -p -h ip_address db_name
MariaDB [test]>
CREATE DATABASE IF NOT EXISTS test;
USE test;
CREATE TABLE IF NOT EXISTS books (
BookID INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
Title VARCHAR(100) NOT NULL,
SeriesID INT, AuthorID INT);
CREATE TABLE IF NOT EXISTS authors
(id INT NOT NULL PRIMARY KEY AUTO_INCREMENT);
CREATE TABLE IF NOT EXISTS series
(id INT NOT NULL PRIMARY KEY AUTO_INCREMENT);
INSERT INTO books (Title,SeriesID,AuthorID)
VALUES('The Fellowship of the Ring',1,1),
('The Two Towers',1,1), ('The Return of the King',1,1),
('The Sum of All Men',2,2), ('Brotherhood of the Wolf',2,2),
('Wizardborn',2,2), ('The Hobbbit',0,1);
SHOW TABLES;
+----------------+
| Tables_in_test |
+----------------+
| authors        |
| books          |
| series         |
+----------------+
3 rows in set (0.00 sec)
DESCRIBE books;
+----------+--------------+------+-----+---------+----------------+
| Field    | Type         | Null | Key | Default | Extra          |
+----------+--------------+------+-----+---------+----------------+
| BookID   | int(11)      | NO   | PRI | NULL    | auto_increment |
| Title    | varchar(100) | NO   |     | NULL    |                |
| SeriesID | int(11)      | YES  |     | NULL    |                |
| AuthorID | int(11)      | YES  |     | NULL    |                |
+----------+--------------+------+-----+---------+----------------+
SELECT * FROM books;
+--------+----------------------------+----------+----------+
| BookID | Title                      | SeriesID | AuthorID |
+--------+----------------------------+----------+----------+
|      1 | The Fellowship of the Ring |        1 |        1 |
|      2 | The Two Towers             |        1 |        1 |
|      3 | The Return of the King     |        1 |        1 |
|      4 | The Sum of All Men         |        2 |        2 |
|      5 | Brotherhood of the Wolf    |        2 |        2 |
|      6 | Wizardborn                 |        2 |        2 |
|      7 | The Hobbbit                |        0 |        1 |
+--------+----------------------------+----------+----------+
7 rows in set (0.00 sec)
INSERT INTO books (Title, SeriesID, AuthorID)
VALUES ("Lair of Bones", 2, 2);
Query OK, 1 row affected (0.00 sec)
SELECT * FROM books;
UPDATE books
SET Title = "The Hobbit"
WHERE BookID = 7;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
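To confirm the UPDATE took effect, the row can be selected again (this assumes the books table created above):

```sql
SELECT Title FROM books WHERE BookID = 7;
```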
The innodb_autoinc_lock_mode system variable determines the lock mode when generating AUTO_INCREMENT values for InnoDB tables. These modes allow InnoDB to make significant performance optimizations in certain circumstances.
The innodb_autoinc_lock_mode system variable may be removed in a future release. See MDEV-19577 for more information.
When innodb_autoinc_lock_mode is set to 0
, InnoDB uses the traditional lock mode.
In this mode, InnoDB holds a table-level lock for all INSERT statements until the statement completes.
When innodb_autoinc_lock_mode is set to 1
, InnoDB uses the consecutive lock mode.
In this mode, InnoDB holds a table-level lock for all bulk INSERT statements (such as LOAD DATA or INSERT ... SELECT) until the end of the statement. For simple INSERT statements, no table-level lock is held. Instead, a lightweight mutex is used which scales significantly better. This is the default setting.
When innodb_autoinc_lock_mode is set to 2
, InnoDB uses the interleaved lock mode.
In this mode, InnoDB does not hold any table-level locks at all. This is the fastest and most scalable mode, but is not safe for statement-based replication.
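Because innodb_autoinc_lock_mode is set at server startup (in an option file, not dynamically), the active mode can be checked at runtime:

```sql
-- 0 = traditional, 1 = consecutive (default), 2 = interleaved
SELECT @@GLOBAL.innodb_autoinc_lock_mode;
```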
The AUTO_INCREMENT value for an InnoDB table can be set for a table by executing the ALTER TABLE statement and specifying the AUTO_INCREMENT table option. For example:
ALTER TABLE tab AUTO_INCREMENT=100;
However, in and before, InnoDB stores the table's AUTO_INCREMENT counter in memory. In these versions, when the server restarts, the counter is re-initialized to the highest value found in the table. This means that the above operation can be undone if the server is restarted before any rows are written to the table.
In and later, the AUTO_INCREMENT counter is persistent, so this restriction is no longer present. Persistent, however, does not mean transactional. Gaps may still occur in some cases, such as when an INSERT IGNORE statement fails, or when a user executes ROLLBACK or ROLLBACK TO SAVEPOINT.
For example:
CREATE TABLE t1 (pk INT AUTO_INCREMENT PRIMARY KEY, i INT, UNIQUE (i)) ENGINE=InnoDB;
INSERT INTO t1 (i) VALUES (1),(2),(3);
INSERT IGNORE INTO t1 (pk, i) VALUES (100,1);
Query OK, 0 rows affected, 1 warning (0.099 sec)
SELECT * FROM t1;
+----+------+
| pk | i    |
+----+------+
|  1 |    1 |
|  2 |    2 |
|  3 |    3 |
+----+------+
SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
Table: t1
Create Table: CREATE TABLE `t1` (
`pk` int(11) NOT NULL AUTO_INCREMENT,
`i` int(11) DEFAULT NULL,
PRIMARY KEY (`pk`),
UNIQUE KEY `i` (`i`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=latin1
If the server is restarted at this point, then the AUTO_INCREMENT counter will be 101, which is the persistent value set as part of the failed INSERT IGNORE.
# Restart server
SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
Table: t1
Create Table: CREATE TABLE `t1` (
`pk` int(11) NOT NULL AUTO_INCREMENT,
`i` int(11) DEFAULT NULL,
PRIMARY KEY (`pk`),
UNIQUE KEY `i` (`i`)
) ENGINE=InnoDB AUTO_INCREMENT=101 DEFAULT CHARSET=latin1
Sequences - an alternative to auto_increment available from
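As a brief illustration of that alternative, a sequence generates values independently of any table; the name book_seq here is only an example:

```sql
CREATE SEQUENCE book_seq START WITH 100 INCREMENT BY 1;
SELECT NEXTVAL(book_seq);  -- 100
SELECT NEXTVAL(book_seq);  -- 101
```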
This page is licensed: CC BY-SA / Gnu FDL
Note that most of the background and redo log scrubbing code has been removed in . See MDEV-15528 and MDEV-21870.
Sometimes there is a requirement that when some data is deleted, it is really gone. This might be the case when one stores user's personal information or some other sensitive data. Normally though, when a row is deleted, the space is only marked as free on the page. It may eventually be overwritten, but there is no guarantee when that will happen. A copy of the deleted rows may also be present in the log files.
introduced support for InnoDB data scrubbing. Background threads periodically scan tablespaces and logs and remove all data that should be deleted. The number of background threads for tablespace scans is set by innodb-encryption-threads. Log scrubbing happens in a separate thread.
To configure scrubbing one can use the following variables:
innodb_background_scrub_data_check_interval (Seconds): Check at this interval if tablespaces need scrubbing. Deprecated and ignored from .
innodb_background_scrub_data_compressed (Boolean): Enable scrubbing of compressed data by background threads. Deprecated and ignored from .
innodb_background_scrub_data_interval (Seconds): Scrub spaces that were last scrubbed longer than this many seconds ago. Deprecated and ignored from .
innodb_background_scrub_data_uncompressed (Boolean): Enable scrubbing of uncompressed data by background threads. Deprecated and ignored from .
innodb_immediate_scrub_data_uncompressed (Boolean): Enable scrubbing of uncompressed data.
innodb_scrub_log (Boolean): Enable redo log scrubbing. Deprecated and ignored from .
innodb_scrub_log_speed (Bytes/sec): Redo log scrubbing speed in bytes/sec. Deprecated and ignored from .
Redo log scrubbing did not fully work as intended, and was deprecated and ignored in (MDEV-21870). If old log contents should be kept secret, then enabling innodb_encrypt_log or setting a smaller innodb_log_file_size could help.
The Information Schema INNODB_TABLESPACES_SCRUBBING table contains scrubbing information.
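For example, scrubbing progress can be inspected with a query along these lines (column names are as documented for this table; verify against your server version):

```sql
SELECT NAME, LAST_SCRUB_COMPLETED, CURRENT_SCRUB_ACTIVE
FROM information_schema.INNODB_TABLESPACES_SCRUBBING;
```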
Scrubbing was donated to the MariaDB project by Google.
This page is licensed: CC BY-SA / Gnu FDL
A Stored Function is a defined function that is called from within an SQL statement like a regular function, and returns a single value.
Here's a skeleton example to see a stored function in action:
DELIMITER //
CREATE FUNCTION FortyTwo() RETURNS TINYINT DETERMINISTIC
BEGIN
DECLARE x TINYINT;
SET x = 42;
RETURN x;
END
//
DELIMITER ;
First, the delimiter is changed, since the function definition will contain the regular semicolon delimiter. See Delimiters in the mariadb client for more. Then the function is named FortyTwo and defined to return a tinyint. The DETERMINISTIC keyword is not necessary in all cases (although if binary logging is on, leaving it out will throw an error), and is there to help the query optimizer choose a query plan. A deterministic function is one that, given the same arguments, will always return the same result.
Next, the function body is placed between BEGIN and END statements. It declares a tinyint, x, which is simply set to 42, and this is the result returned.
SELECT FortyTwo();
+------------+
| FortyTwo() |
+------------+
|         42 |
+------------+
Of course, a function that doesn't take any arguments is of little use. Here's a more complex example:
DELIMITER //
CREATE FUNCTION VatCents(price DECIMAL(10,2)) RETURNS INT DETERMINISTIC
BEGIN
DECLARE x INT;
SET x = price * 114;
RETURN x;
END //
Query OK, 0 rows affected (0.04 sec)
DELIMITER ;
This function takes an argument, price, which is defined as a DECIMAL, and returns an INT.
Take a look at the CREATE FUNCTION page for more details.
From , it is also possible to create stored aggregate functions.
To find which stored functions are running on the server, use SHOW FUNCTION STATUS.
SHOW FUNCTION STATUS\G
*************************** 1. row ***************************
Db: test
Name: VatCents
Type: FUNCTION
Definer: root@localhost
Modified: 2013-06-01 12:40:31
Created: 2013-06-01 12:40:31
Security_type: DEFINER
Comment:
character_set_client: utf8
collation_connection: utf8_general_ci
Database Collation: latin1_swedish_ci
1 row in set (0.00 sec)
or query the routines table in the INFORMATION_SCHEMA database directly:
SELECT ROUTINE_NAME FROM INFORMATION_SCHEMA.ROUTINES WHERE
ROUTINE_TYPE='FUNCTION';
+--------------+
| ROUTINE_NAME |
+--------------+
| VatCents     |
+--------------+
To find out what the stored function does, use SHOW CREATE FUNCTION.
SHOW CREATE FUNCTION VatCents\G
*************************** 1. row ***************************
Function: VatCents
sql_mode:
Create Function: CREATE DEFINER=`root`@`localhost` FUNCTION `VatCents`(price DECIMAL(10,2)) RETURNS int(11)
DETERMINISTIC
BEGIN
DECLARE x INT;
SET x = price * 114;
RETURN x;
END
character_set_client: utf8
collation_connection: utf8_general_ci
Database Collation: latin1_swedish_ci
To drop a stored function, use the DROP FUNCTION statement.
DROP FUNCTION FortyTwo;
To change the characteristics of a stored function, use ALTER FUNCTION. Note that you cannot change the parameters or body of a stored function using this statement; to make such changes, you must drop and re-create the function using DROP FUNCTION and CREATE FUNCTION.
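For example, a characteristic such as the comment can be changed in place (using the VatCents function from above):

```sql
ALTER FUNCTION VatCents COMMENT 'Converts a price to VAT cents';
```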
See the article Stored Routine Privileges.
This page is licensed: CC BY-SA / Gnu FDL
Binary logging can be row-based, statement-based, or a mix of the two. See Binary Log Formats for more details on the formats. If logging is statement-based, it is possible that a statement will have different effects on the master and on the slave.
Stored routines are particularly prone to this, for two main reasons:
stored routines can be non-deterministic, in other words non-repeatable, and therefore have different results each time they are run.
the slave thread executing the stored routine on the slave holds full privileges, while this may not be the case when the routine was run on the master.
The problems with replication will only occur with statement-based logging. If row-based logging is used, since changes are made to rows based on the master's rows, there is no possibility of the slave and master getting out of sync.
By default, with row-based replication, triggers run on the master, and the effects of their executions are replicated to the slaves. However, starting from , it is possible to run triggers on the slaves. See Running triggers on the slave for Row-based events.
If the following criteria are met, then there are some limitations on whether stored routines can be created:
The binary log is enabled, and the binlog_format system variable is set to STATEMENT. See Binary Log Formats for more information.
The log_bin_trust_function_creators system variable is set to OFF, which is the default value.
If the above criteria are met, then the following limitations apply:
When a stored function is created, it must be declared as either DETERMINISTIC, NO SQL or READS SQL DATA, or else an error will occur. MariaDB cannot check whether a function is deterministic, and relies on the correct definition being used.
To create or modify a stored function, a user requires the SUPER privilege as well as the regular privileges. See Stored Routine Privileges for these details.
Triggers can also update data. The slave uses the DEFINER attribute to determine which user is taken to have created the trigger.
Note that the above limitations do not apply to stored procedures or to events.
A deterministic function:
DELIMITER //
CREATE FUNCTION trust_me(x INT)
RETURNS INT
DETERMINISTIC
READS SQL DATA
BEGIN
RETURN (x);
END //
DELIMITER ;
A non-deterministic function, since it uses the UUID_SHORT function:
DELIMITER //
CREATE FUNCTION dont_trust_me()
RETURNS INT
BEGIN
RETURN UUID_SHORT();
END //
DELIMITER ;
This page is licensed: CC BY-SA / Gnu FDL
Prior to , the InnoDB storage engine supports two different file formats.
In and before, the default file format for InnoDB tables can be chosen by setting the innodb_file_format system variable.
In and before, the default file format is Antelope. In and later, the default file format is Barracuda and Antelope is deprecated.
A table's tablespace is tagged with the lowest InnoDB file format that supports the table's row format. So, even if the Barracuda file format is enabled, tables that use the COMPACT or REDUNDANT row formats will be tagged with the Antelope file format in the information_schema.INNODB_SYS_TABLES table.
The InnoDB storage engine supports two different file formats: Antelope and Barracuda. In and before, the default file format is Antelope. In and later, the Antelope file format is deprecated.
Antelope is the original InnoDB file format. It supports the COMPACT and REDUNDANT row formats, but not the DYNAMIC or COMPRESSED row formats.
In and before, the Barracuda file format is only supported if the innodb_file_per_table system variable is set to ON. In and later, the default file format is Barracuda and Antelope is deprecated.
Barracuda is a newer InnoDB file format. It supports the COMPACT, REDUNDANT, DYNAMIC and COMPRESSED row formats. Tables with large BLOB or TEXT columns in particular could benefit from the dynamic row format.
InnoDB might use new file formats in the future. Each format will have an identifier from 0 to 25, and a name. The names have already been decided, and are animal names listed in an alphabetical order: Antelope, Barracuda, Cheetah, Dragon, Elk, Fox, Gazelle, Hornet, Impala, Jaguar, Kangaroo, Leopard, Moose, Nautilus, Ocelot, Porpoise, Quail, Rabbit, Shark, Tiger, Urchin, Viper, Whale, Xenops, Yak and Zebra.
The information_schema.INNODB_SYS_TABLES table can be queried to see the file format used by a table.
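For example, a query along these lines shows the row format (from which the file format follows) for tables in a given database; the 'test/%' pattern is just an example:

```sql
SELECT NAME, ROW_FORMAT
FROM information_schema.INNODB_SYS_TABLES
WHERE NAME LIKE 'test/%';
```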
Each tablespace is tagged with the id of the most recent file format used by one of its tables. All versions of InnoDB can read tables that use an older file format. However, it can not read from more recent formats. For this reason, each time InnoDB opens a table it checks the tablespace's format, and returns an error if a newer format is used.
This check can be skipped via the innodb_file_format_check variable. Beware that if InnoDB tries to repair a table in an unknown format, the table will be corrupted! This happens on restart if innodb_file_format_check is disabled and the server crashed, or if it was closed with fast shutdown.
To downgrade a table from the Barracuda format to Antelope, the table's ROW_FORMAT can be set to a value supported by Antelope via an ALTER TABLE statement. This recreates the indexes.
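For example, assuming a table named tab currently using a Barracuda-only row format:

```sql
-- COMPACT is supported by Antelope, so this downgrades the tablespace tag
ALTER TABLE tab ROW_FORMAT=COMPACT;
```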
The Antelope format can be used to make sure that tables work on MariaDB and MySQL servers which are older than 5.5.
This page is licensed: CC BY-SA / Gnu FDL
The InnoDB storage engine in MariaDB Enterprise Server utilizes the Buffer Pool as a crucial in-memory cache. This Buffer Pool stores recently accessed data pages, enabling faster retrieval for subsequent requests. Recognizing patterns of access, InnoDB also employs predictive prefetching, caching nearby pages when sequential access is detected. To manage memory efficiently, a least recently used (LRU) algorithm is used to evict older, less frequently accessed pages.
To optimize server restarts, the Buffer Pool's contents can be preserved across shutdowns. At shutdown, the page numbers of all pages residing in the Buffer Pool are recorded. Upon the next startup, InnoDB reads this dump of page numbers and reloads the corresponding data pages from their respective data files, effectively avoiding a "cold" cache scenario.
The size of each individual page within the Buffer Pool is determined by the setting of the innodb_page_size system variable.
The buffer pool attempts to keep frequently-used blocks in the buffer, and so essentially works as two sublists, a new sublist of recently-used information, and an old sublist of older information. By default, 37% of the list is reserved for the old list.
When new information is accessed that doesn't appear in the list, it is placed at the top of the old list, the oldest item in the old list is removed, and everything else bumps back one position in the list.
When information is accessed that appears in the old list, it is moved to the top of the new list, and everything above moves back one position.
The most important server system variable is innodb_buffer_pool_size. This size should contain most of the active data set of your server, so that SQL requests can work directly with information in the buffer pool cache. Starting at several gigabytes of memory is a good starting point if you have that RAM available. Once the pool is warmed up to its normal load, there should be very few innodb_buffer_pool_reads compared to innodb_buffer_pool_read_requests. Look at how these values change over a minute: if the change in innodb_buffer_pool_reads is less than 1% of the change in innodb_buffer_pool_read_requests, then you have a good amount of usage. If the status variable innodb_buffer_pool_wait_free is increasing, then you don't have enough buffer pool (or your flushing isn't occurring frequently enough).
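The two counters mentioned above can be sampled together with a wildcard pattern; comparing two samples taken a minute apart gives the miss rate:

```sql
-- Innodb_buffer_pool_reads = reads from disk; read_requests = logical reads
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
```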
The larger the size, the longer it will take to initialize. On a modern 64-bit server with a 10GB memory pool, this can take five seconds or more. Increasing innodb_buffer_pool_chunk_size by several factors will reduce this significantly. could start up with a 96GB buffer pool in less than 1 second.
Make sure that the size is not too large, causing swapping. The benefit of a larger buffer pool size is more than undone if your operating system is regularly swapping.
The buffer pool can be set dynamically. See Setting Innodb Buffer Pool Size Dynamically.
The functionality described below was disabled in , and removed in , as the original reasons for splitting the buffer pool have mostly gone away.
The default 37% reserved for the old list can be adjusted by changing the value of innodb_old_blocks_pct. It can accept anything between 5% and 95%.
The innodb_old_blocks_time variable specifies the delay before a block can be moved from the old to the new sublist. 0 means no delay, while the default has been set to 1000.
Before changing either of these values from their defaults, make sure you understand the impact and how your system currently uses the buffer. Their main reason for existence is to reduce the impact of full table scans, which are usually infrequent, but large, and previously could clear everything from the buffer. Setting a non-zero delay could help in situations where full table scans are performed in quick succession.
Temporarily changing these values can also be useful to avoid the negative impact of a full table scan, as explained in InnoDB logical backups.
When the server starts, the buffer pool is empty. As it starts to access data, the buffer pool will slowly be populated. As more data will be accessed, the most frequently accessed data will be put into the buffer pool, and old data may be evicted. This means that a certain period of time is necessary before the buffer pool is really useful. This period of time is called the warmup.
InnoDB can dump the buffer pool before the server shuts down, and restore it when it starts again. If this feature is used, no warmup is necessary. Use the innodb_buffer_pool_dump_at_shutdown and innodb_buffer_pool_load_at_startup system variables to enable or disable the buffer pool dump at shutdown and the restore at startup respectively.
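A sketch of enabling both in an option file:

```ini
[mysqld]
# Dump the buffer pool contents at shutdown and reload them at startup
innodb_buffer_pool_dump_at_shutdown = ON
innodb_buffer_pool_load_at_startup  = ON
```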
It is also possible to dump the InnoDB buffer pool at any moment while the server is running, and it is possible to restore the last buffer pool dump at any moment. To do this, the special innodb_buffer_pool_dump_now and innodb_buffer_pool_load_now system variables can be set to ON. When selected, their value is always OFF.
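For example:

```sql
-- Dump the buffer pool immediately
SET GLOBAL innodb_buffer_pool_dump_now = ON;
-- Later, reload the most recent dump
SET GLOBAL innodb_buffer_pool_load_now = ON;
-- Check how the dump/load is progressing
SHOW STATUS LIKE 'Innodb_buffer_pool_dump_status';
SHOW STATUS LIKE 'Innodb_buffer_pool_load_status';
```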
A buffer pool restore, both at startup or at any other moment, can be aborted by setting innodb_buffer_pool_load_abort to ON.
The file which contains the buffer pool dump is specified via the innodb_buffer_pool_filename system variable.
This page is licensed: CC BY-SA / Gnu FDL
This article briefly discusses the main ways to backup MariaDB. For detailed descriptions and syntax, see the individual pages. More detail is in the process of being added.
Logical backups consist of the SQL statements necessary to restore the data, such as , and .
Physical backups are performed by copying the individual data files or directories.
The main differences are as follows:
logical backups are more flexible, as the data can be restored on other hardware configurations, MariaDB versions or even on another DBMS, while physical backups cannot be imported on significantly different hardware, a different DBMS, or potentially even a different MariaDB version.
logical backups can be performed at the level of databases and tables, while physical backups are at the level of directories and files. In the and storage engines, each table has an equivalent set of files.
logical backups are larger in size than the equivalent physical backup.
logical backups take more time to both backup and restore than the equivalent physical backup.
log files and configuration files are not part of a logical backup.
mariadb-backup
The program is a fork of with added support for and .
mariadb-dump
(previously mysqldump) performs a logical backup. It is the most flexible way to perform a backup and restore, and a good choice when the data size is relatively small.
For large datasets, the backup file can be large, and the restore time lengthy.
mariadb-dump dumps the data into SQL format (it can also dump into other formats, such as CSV or XML), which can then easily be imported into another database. The data can be imported into other versions of MariaDB, MySQL, or even another DBMS entirely, assuming there are no version- or DBMS-specific statements in the dump.
mariadb-dump dumps triggers along with tables, as these are part of the table definition. However, , , and are not, and need extra parameters to be recreated explicitly (for example, --routines and --events). and are however also part of the system tables (for example ).
InnoDB uses the , which stores data and indexes from its tables in memory. This buffer is very important for performance. If the InnoDB data doesn't fit in memory, it is important that the buffer contains the most frequently accessed data. However, recently accessed data is always a candidate for insertion into the buffer pool, so if not properly configured, InnoDB may copy the whole contents of a table into the buffer pool when a table scan happens. The problem with logical backups is that they always imply full table scans.
An easy way to avoid this is to increase the value of the system variable. It represents the number of milliseconds that must pass before a recently accessed page can be moved into the "new" sublist in the buffer pool. Data which is accessed only once remains in the "old" sublist, which means it will soon be evicted from the buffer pool. Since during the backup process the "old" sublist is likely to store data that is not useful, one could also consider resizing it by changing the value of the system variable.
It is also possible to explicitly dump the buffer pool to disk before starting a logical backup, and restore it after the process. This undoes any negative change to the buffer pool which happens during the backup. To dump the buffer pool, the system variable can be set to ON. To restore it, the system variable can be set to ON.
mariadb-dump Examples
Backing up a single database
Restoring or loading the database
See the page for detailed syntax and examples.
mariadb-hotcopy
mariadb-hotcopy performs a physical backup, and works only for backing up and tables. It can only be run on the same machine as the database directories.
mariadb-hotcopy Examples
Percona XtraBackup is not supported in MariaDB. is the recommended backup method to use instead of Percona XtraBackup. See for more information.
is a tool for performing fast, hot backups. It was designed specifically for databases, but can be used with any storage engine (although not with and ). It is not included with MariaDB.
Some filesystems, like Veritas, support snapshots. During the snapshot, the table must be locked. The proper steps to obtain a snapshot are:
From the client, execute . The client must remain open.
From a shell, execute mount vxfs snapshot
The client can execute .
Copy the snapshot files.
From a shell, unmount the snapshot with umount snapshot.
Widely-used physical backup method, using a Perl script as a wrapper. See for more information.
For details, see:
Besides the system utilities, it is possible to use third-party GUI tools to perform backup and restore operations. In this context, it is worth mentioning dbForge Studio for MySQL, a feature-rich database IDE that is fully compatible with MariaDB and delivers extensive backup functionality.
The backup and restore module of the Studio allows precise control, down to particular database objects. Regular backups can be scheduled, with specific settings to handle errors and keep a log of them. Additionally, settings and configurations can be saved for later reuse.
These operations are wizard-aided, allowing users to set up all tasks in a visual mode.
(mariadb.com blog)
This page is licensed: CC BY-SA / Gnu FDL
This method addresses a limitation of mariadb-backup: it cannot easily restore a single database from a full backup. There is a , but it's a manual process which is fine for a few tables; if you have hundreds or even thousands of tables, it would be impossible to do quickly.
We can't just move the data files to the datadir, as the tables are not registered in the engine, so the database will error. Currently, the only effective method is to do a full restore into a test database and then dump the database that requires restoring, or to run a partial backup.
This has only been tested with InnoDB. Also, if you have stored procedures or triggers then these will need to be deleted and recreated.
Some of the issues that this method overcomes:
Tables are not registered in the InnoDB engine, so selecting from a table will error if you simply move the data files into the datadir
Tables with foreign keys need to be created without the keys, otherwise discarding the tablespace will error
Below is the process to perform a single database restore.
First, we will need the table structure from a mariadb-dump backup taken with the --no-data option. I recommend doing this at least once per day, or every six hours, via a cronjob. As it is just the structure, it will be very fast.
Use sed to extract only the table structure we require, then use vim or another text editor to make sure nothing extraneous is left.
I won’t go over the backup process, as this is covered in other documents, such as full-backup-and-restore-with-mariadb-backup. Prepare the backup with any incremental-backup-and-restores that you have, then run the following on the full backup folder using the --export option to generate files with .cfg extensions, which InnoDB will look for.
Once we have done these steps, we can import the table structure. If you used the --all-databases option, you will need to either use sed or open the dump in a text editor and extract the tables that you require. You will also need to log in to the database and create the database if the dump file doesn't do so. Then run the following command:
Once the structure is in the database, the tables are registered with the engine. Next, we will run the following statements in the information_schema database to generate the statements that import/discard tablespaces and drop and recreate foreign keys, which we will use later. (Edit the CONSTRAINT_SCHEMA and TABLE_SCHEMA WHERE clauses to the database you are restoring. Also, add the following lines after your SELECT and before the FROM to have MariaDB export the files to the OS.)
The following are the statements that we will need later.
Once those statements have been run and their output exported to a Linux directory, or copied from a GUI interface:
Run the ALTER DROP KEYS statements in the database
Once completed, run the DROP TABLE SPACE statements in the database
Exit the database and change into the directory of the full backup location. Run the following commands to copy all the .cfg and .ibd files to the datadir, such as /var/lib/mysql/testdatabase (change the datadir location if needed). Learn more about the files that mariadb-backup generates in files-created-by-mariadb-backup.
After moving the files, it is very important that the mysql user owns them, otherwise the server won't have access to them and will error when we import the tablespaces.
Run the import table spaces statements in the database.
Run the add key statements in the database
We have successfully restored a single database. To test that this has worked, we can do a basic check on some tables.
If you have a primary-replica setup, it would be best to follow the steps above for the primary node, then either take a full mariadb-dump or take a new full mariadb-backup and restore it to the replica. You can find more information about restoring a replica with mariadb-backup in Setting up a Replica with mariadb-backup.
After running the command below, copy the dump to the replica and use the less Linux command to find the CHANGE MASTER statement. Remember to follow this process: stop replica > restore data > run the CHANGE MASTER statement > start the replica again.
Please follow Setting up a Replica with mariadb-backup on restoring a replica with mariadb-backup
For this process to work with a Galera cluster, we first need to understand that some statements are not replicated across Galera nodes. Among them are the DISCARD and IMPORT clauses of ALTER TABLE statements, which will need to be run on all nodes. We also need to run the OS-level steps on each server, as seen below.
Run the ALTER DROP KEYS statements on ONE NODE as these are replicated.
Once completed, run the DROP TABLE SPACE statements on EVERY NODE, as these are not replicated.
Exit the database and change into the directory of the full backup location. Run the following commands to copy all the .cfg and .ibd files to the datadir, such as /var/lib/mysql/testdatabase (change the datadir location if needed). Learn more about the files that mariadb-backup generates in files-created-by-mariadb-backup. This step needs to be done on all nodes; you will need to copy the backup files to each node, and the same backup can be used on all nodes.
After moving the files, it is very important that the mysql user owns them, otherwise the server won't have access to them and will error when we import the tablespaces.
Run the import table spaces statements on EVERY NODE.
Run the add key statements on ONE NODE
This page is licensed: CC BY-SA / Gnu FDL
mariadb-backup creates the following files:
backup-my.cnf
During the backup, any server options relevant to mariadb-backup are written to the backup-my.cnf option file, so that they can be re-read later during the --prepare stage.
ib_logfile0
In and later, mariadb-backup creates an empty InnoDB redo log file called ib_logfile0 as part of the --prepare stage. This file has three roles:
In the source server, ib_logfile0 is the first (and possibly the only) InnoDB redo log file.
In the non-prepared backup, ib_logfile0 contains all of the InnoDB redo log copied during the backup. Previous versions of mariadb-backup used a file called xtrabackup_logfile for this.
During the --prepare stage, ib_logfile0 would previously be deleted. Now, during the --prepare stage, ib_logfile0 is initialized as an empty InnoDB redo log file. That way, if the backup is manually restored, any pre-existing InnoDB redo log files get overwritten by the empty one. This helps to prevent certain kinds of known issues. For example, see mariadb-backup Overview: Manual Restore with Pre-existing InnoDB Redo Log files.
xtrabackup_logfile
In and before, mariadb-backup creates xtrabackup_logfile to store the InnoDB redo log. In later versions, ib_logfile0 is created instead.
xtrabackup_binlog_info
This file stores the binary log file name and position that corresponds to the backup.
This file also stores the value of the gtid_current_pos system variable that corresponds to the backup.
For example:
The values in this file are only guaranteed to be consistent with the backup if the --no-lock option was not provided when the backup was taken.
xtrabackup_binlog_pos_innodb
This file is created by mariadb-backup to provide the binary log file name and position when the --no-lock option is used. It can be used instead of the file "xtrabackup_binlog_info" to obtain transactionally consistent binlog coordinates from the backup of a master server with the --no-lock option to minimize the impact on a running server.
Whenever a transaction is committed inside InnoDB when the binary log is enabled, the corresponding binlog coordinates are written to the InnoDB redo log along with the transaction commit. This allows one to restore the binlog coordinates corresponding to the last commit done by InnoDB along with a backup.
The limitation of using "xtrabackup_binlog_pos_innodb" with the "--no-lock" option is that no DDL or modification of non-transactional tables should be done during the backup. If the last event in the binlog is a DDL/non-transactional update, the coordinates in the file "xtrabackup_binlog_pos_innodb" will be too old. But as long as only InnoDB updates are done during the backup, the coordinates will be correct.
xtrabackup_checkpoints
The xtrabackup_checkpoints file contains metadata about the backup.
For example:
See below for a description of the fields.
If the --extra-lsndir option is provided, then an extra copy of this file is saved in that directory.
backup_type
If the backup is a non-prepared full backup or a non-prepared partial backup, then backup_type is set to full-backuped.
If the backup is a non-prepared incremental backup, then backup_type is set to incremental.
If the backup has already been prepared, then backup_type is set to log-applied.
from_lsn
If backup_type is full-backuped, then from_lsn has the value of 0.
If backup_type is incremental, then from_lsn has the value of the log sequence number (LSN) at which the backup started reading from the InnoDB redo log. This is used internally by mariadb-backup when preparing incremental backups.
This value can be manually set during an incremental backup with the --incremental-lsn option. However, it is generally better to let mariadb-backup figure out the from_lsn automatically by specifying a parent backup with the --incremental-basedir option.
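A sketch of taking an incremental backup against a parent full backup, so that from_lsn is derived automatically (paths and credentials are illustrative):

```shell
# Full base backup
mariadb-backup --backup --target-dir=/var/mariadb/backup/full/ \
    --user=mariadb-backup --password=mypassword
# Incremental backup based on the full one
mariadb-backup --backup --target-dir=/var/mariadb/backup/inc1/ \
    --incremental-basedir=/var/mariadb/backup/full/ \
    --user=mariadb-backup --password=mypassword
```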
to_lsn
to_lsn has the value of the log sequence number (LSN) of the last checkpoint in the InnoDB redo log. This is used internally by mariadb-backup when preparing incremental backups.
last_lsn
last_lsn has the value of the last log sequence number (LSN) read from the InnoDB redo log. This is used internally by mariadb-backup when preparing incremental backups.
xtrabackup_info
The xtrabackup_info file contains information about the backup. The fields in this file are listed below.
If the --extra-lsndir option is provided, then an extra copy of this file will be saved in that directory.
uuid
If a UUID was provided by the --incremental-history-uuid option, then it will be saved here. Otherwise, this will be the empty string.
name
If a name was provided by the --history or the --incremental-history-name options, then it will be saved here. Otherwise, this will be the empty string.
tool_name
The name of the mariadb-backup executable that performed the backup. This is generally mariadb-backup.
tool_command
The arguments that were provided to mariadb-backup when it performed the backup.
tool_version
The version of mariadb-backup that performed the backup.
ibbackup_version
The version of mariadb-backup that performed the backup.
server_version
The version of MariaDB Server that was backed up.
start_time
The time that the backup started.
end_time
The time that the backup ended.
lock_time
The amount of time that mariadb-backup held its locks.
binlog_pos
This field stores the binary log file name and position that corresponds to the backup.
This field also stores the value of the gtid_current_pos system variable that corresponds to the backup.
The values in this field are only guaranteed to be consistent with the backup if the --no-lock option was not provided when the backup was taken.
innodb_from_lsn
This is identical to from_lsn in xtrabackup_checkpoints.
If the backup is a full backup, then innodb_from_lsn has the value of 0.
If the backup is an incremental backup, then innodb_from_lsn has the value of the log sequence number (LSN) at which the backup started reading from the InnoDB redo log.
innodb_to_lsn
This is identical to to_lsn in xtrabackup_checkpoints.
innodb_to_lsn has the value of the log sequence number (LSN) of the last checkpoint in the InnoDB redo log.
partial
If the backup is a partial backup, then this value will be Y. Otherwise, this value will be N.
incremental
If the backup is an incremental backup, then this value will be Y. Otherwise, this value will be N.
format
This field's value is the format of the backup.
If the --stream option was set to xbstream, then this value will be xbstream.
If the --stream option was not provided, then this value will be file.
compressed
If the --compress option was provided, then this value will be compressed. Otherwise, this value will be N.
xtrabackup_slave_info
If the --slave-info option is provided, then this file contains the CHANGE MASTER command that can be used to set up a new server as a slave of the original server's master after the backup has been restored.
mariadb-backup does not check if GTIDs are being used in replication. It takes a shortcut and assumes that if the gtid_slave_pos system variable is non-empty, then it writes the CHANGE MASTER command with the MASTER_USE_GTID option set to slave_pos. Otherwise, it writes the CHANGE MASTER command with the MASTER_LOG_FILE and MASTER_LOG_POS options using the master's binary log file and position. See for more information.
xtrabackup_galera_info
If the --galera-info option is provided, then this file contains information about a Galera Cluster node's state.
The file contains the values of the and status variables.
The values are written in the following format:
For example:
<table>.delta
If the backup is an incremental backup, then this file contains changed pages for the table.
<table>.delta.meta
If the backup is an incremental backup, then this file contains metadata about <table>.delta files. The fields in this file are listed below.
page_size
This field contains either the value of innodb_page_size or the value of the KEY_BLOCK_SIZE table option for the table if the ROW_FORMAT table option for the table is set to COMPRESSED.
zip_size
If the ROW_FORMAT table option for this table is set to COMPRESSED, then this field contains the value of the compressed page size.
space_id
This field contains the value of the table's space_id.
This page is licensed: CC BY-SA / Gnu FDL
The has the following limitations.
InnoDB tables can have a maximum of 1,017 columns. This includes .
InnoDB tables can have a maximum of 64 secondary indexes.
A multicolumn index on InnoDB can use a maximum of 32 columns. If you attempt to create a multicolumn index that uses more than 32 columns, MariaDB returns an Error 1070.
With the exception of variable-length columns (that is, , , and ), rows in InnoDB have a maximum length of roughly half the page size for 4KB, 8KB, 16KB and 32KB page sizes.
The maximum size for and columns is 4GB. This also applies to and .
MariaDB imposes a row-size limit of 65,535 bytes for the combined sizes of all columns. If the table contains or columns, these only count for 9 - 12 bytes in this calculation, given that their content is stored separately.
32-bit operating systems have a maximum file size limit of 2GB. When working with large tables using this architecture, configure InnoDB to use smaller data files.
The maximum size for the combined InnoDB log files is 512GB.
With tablespaces, the minimum size is 10MB, the maximum varies depending on the InnoDB Page Size.
Using the system variable, you can configure the size in bytes for InnoDB pages. Pages default to 16KB. There are certain limitations on how you use this variable.
MariaDB instances using one page size cannot use data files or log files from an instance using a different page size.
When using a Page Size of 4KB or 8KB, the maximum index key length is lowered proportionately.
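A sketch of setting the page size in an option file; it must be set before the instance's data files are first created:

```ini
[mysqld]
# The page size cannot be changed for an existing instance
innodb_page_size = 4K
```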
InnoDB has the following table-specific limitations.
When you issue a statement, InnoDB doesn't regenerate the table, rather it deletes each row from the table one by one.
When running MariaDB on Windows, InnoDB stores databases and tables in lowercase. When moving databases and tables in a binary format from Windows to a Unix-like system or from a Unix system to Windows, you need to rename these to use lowercase.
When using cascading , operations in the cascade don't activate triggers.
When running twice on a table in which statements or transactions are running, MariaDB blocks the second until the statement or transaction is complete. This occurs because the statement or transaction blocks the second statement from reloading the table definition, which it must do since the old one was marked as obsolete after the first statement.
statements do not provide accurate statistics for InnoDB, except for the physical table size.
The InnoDB storage engine does not maintain internal row counts. Transactions isolate writes, which means that concurrent transactions will not have the same row counts.
When defining an index on an auto-incrementing column, it must be defined in a way that allows the equivalent of SELECT MAX(col) lookups on the table.
Restarting MariaDB may cause InnoDB to reuse old auto-increment values, such as in the case of a transaction that was rolled back.
When auto-incrementing columns run out of values, statements generate duplicate-key errors.
A maximum of 96 * 1023 concurrent transactions that generate undo records can modify data.
Of the 128 rollback segments, InnoDB assigns 32 to non-redo logs for transactions that modify temporary tables and related objects, reducing the maximum number of concurrent data-modifying transactions from about 128,000 to about 96,000.
The limit is 32,000 concurrent transactions when all data-modifying transactions also modify temporary tables.
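The arithmetic behind these limits can be sketched as follows (the per-segment figure of 1023 transactions follows from the 96 * 1023 limit above):

```python
# Sketch of the rollback-segment arithmetic described above.
total_segments = 128      # total rollback segments
temp_segments = 32        # assigned to temporary-table undo logs
regular_segments = total_segments - temp_segments

per_segment = 1023        # concurrent data-modifying transactions per segment

print(regular_segments * per_segment)  # roughly the "96,000" limit
print(temp_segments * per_segment)     # roughly the "32,000" limit
```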
Issuing a statement sets two locks on each table when the system variable is enabled (the default).
When you commit or roll back a transaction, any locks set in the transaction are released. You don't need to issue statements when the variable is enabled, as InnoDB would immediately release the table locks.
This page is licensed: CC BY-SA / Gnu FDL
The change buffer has been disabled by default from , , and (), was deprecated and ignored from (), and was removed in (). Benchmarks attached to show that the change buffer sometimes reduces performance, and in the best case seem to bring a few per cent improvement to throughput. However, such improvement could come with a price: If the buffered changes are never merged (, motivated by the reduction of random crashes and the removal of an innodb_force_recovery option that can inflict further corruption), then the InnoDB system tablespace can grow out of control ().
INSERT, UPDATE and DELETE statements can be particularly heavy operations to perform, as all indexes need to be updated after each change. For this reason these changes are often buffered.
Pages are modified in the , and not immediately on disk. After all the records that cover the changes to a data page have been written to the InnoDB redo log, the changed page may be written ("flushed") to a data file. Pages that have been modified in memory but not yet flushed are called dirty pages.
The Change Buffer is an optimization that allows some data to be modified even though the data page does not exist in the buffer pool. Instead of modifying the data in its final destination, a record is inserted into a special Change Buffer that resides in the system tablespace. When the page is read into the buffer pool for any reason, the buffered changes are applied to it.
The Change Buffer only contains changes to secondary index leaf pages.
Before , only inserted rows could be buffered, so this buffer was called Insert Buffer. The old name still appears in several places, for example in the output of .
Inserts to UNIQUE secondary indexes cannot be buffered unless is used. This may sometimes allow duplicates to be inserted into the UNIQUE secondary index. Much of the time, the UNIQUE constraint would be checked because the change buffer could only be used if the index page is not located in the buffer pool.
When rows are deleted, a flag is set, thus rows are not immediately deleted. Delete-marked records may be purged after the transaction has been committed and any read views that were created before the commit have been closed. Delete-mark and purge buffering of any secondary indexes is allowed.
ROLLBACK never makes use of the change buffer; it would force a merge of any changes that were buffered during the execution of the transaction.
The Change Buffer is an optimization because:
Some random-access page reads will be transformed into modifications of change buffer pages.
A change buffer page can be modified several times in memory and be flushed to disk only once.
Dirty pages are flushed together, so the number of IO operations is lower.
If the server crashes or is shut down, the Change Buffer might not be empty. The Change Buffer resides in the InnoDB system tablespace and is covered by the write-ahead log, so buffered changes can be applied at server restart. A shutdown with will merge all buffered changes.
Starting with , there is no longer a background task that merges the change buffer into the secondary index pages. Changes are only merged on demand.
The Change Buffer was removed in because it has been a prominent source of corruption bugs that have been extremely hard to reproduce.
The main server system variable here is , which determines which form of change buffering, if any, to use.
The following settings are available:
| Value   | Description |
|---------|-------------|
| inserts | Only buffer insert operations |
| deletes | Only buffer delete operations |
| changes | Buffer both insert and delete operations |
| purges  | Buffer the actual physical deletes that occur in the background |
| all     | Buffer inserts, deletes and purges. Default setting from until , , and . |
| none    | Don't buffer any operations. Default from , , and . |
Modifying the value of this variable only affects the buffering of new operations. The merging of already buffered changes is not affected.
The system variable determines the maximum size of the change buffer, expressed as a percentage of the buffer pool.
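On versions where the change buffer still exists, both variables can be adjusted at runtime (the values shown are examples only):

```sql
-- Disable buffering of new operations; already-buffered changes still merge
SET GLOBAL innodb_change_buffering = 'none';
-- Cap the change buffer at 25% of the buffer pool
SET GLOBAL innodb_change_buffer_max_size = 25;
```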
This page is licensed: CC BY-SA / Gnu FDL
mariadb-dump db_name > backup-file.sql
mariadb db_name < backup-file.sql
mariadb-hotcopy db_name [/path/to/new_directory]
mariadb-hotcopy db_name_1 ... db_name_n /path/to/new_directory
| InnoDB Page Size | Maximum Tablespace Size |
|------------------|-------------------------|
| 4KB              | 16TB                    |
| 8KB              | 32TB                    |
| 16KB             | 64TB                    |
| 32KB             | 128TB                   |
| 64KB             | 256TB                   |
| InnoDB Page Size | Index Key Length |
|------------------|------------------|
| 4KB              | 768B             |
| 8KB              | 1536B            |
| 16KB             | 3072B            |
mariadb-dump -u root -p --all-databases --no-data > nodata.sql
sed -n '/Current Database: `DATABASENAME`/, /Current Database:/p' nodata.sql > trimmednodata.sql
vim trimmednodata.sql
mariadb-backup --prepare --export --target-dir=/media/backups/fullbackupfolder
mariadb -u root -p schema_name < nodata.sql
SELECT ...
INTO OUTFILE '/tmp/filename.SQL'
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
FROM ...
USE information_schema;
SELECT concat("ALTER TABLE ",table_name," DISCARD TABLESPACE;") AS discard_tablespace
FROM information_schema.tables
WHERE TABLE_SCHEMA="DATABASENAME";
SELECT concat("ALTER TABLE ",table_name," IMPORT TABLESPACE;") AS import_tablespace
FROM information_schema.tables
WHERE TABLE_SCHEMA="DATABASENAME";
SELECT
concat ("ALTER TABLE ", rc.CONSTRAINT_SCHEMA, ".",rc.TABLE_NAME," DROP FOREIGN KEY ", rc.CONSTRAINT_NAME,";") AS drop_keys
FROM REFERENTIAL_CONSTRAINTS AS rc
WHERE CONSTRAINT_SCHEMA = 'DATABASENAME';
SELECT
CONCAT ("ALTER TABLE ",
KCU.CONSTRAINT_SCHEMA, ".",
KCU.TABLE_NAME,"
ADD CONSTRAINT ",
KCU.CONSTRAINT_NAME, "
FOREIGN KEY ", "
(`",KCU.COLUMN_NAME,"`)", "
REFERENCES `",REFERENCED_TABLE_NAME,"`
(`",REFERENCED_COLUMN_NAME,"`)" ,"
ON UPDATE " ,(SELECT UPDATE_RULE FROM REFERENTIAL_CONSTRAINTS WHERE CONSTRAINT_NAME = KCU.CONSTRAINT_NAME AND CONSTRAINT_SCHEMA = KCU.CONSTRAINT_SCHEMA),"
ON DELETE ",(SELECT DELETE_RULE FROM REFERENTIAL_CONSTRAINTS WHERE CONSTRAINT_NAME = KCU.CONSTRAINT_NAME AND CONSTRAINT_SCHEMA = KCU.CONSTRAINT_SCHEMA),";") AS add_keys
FROM KEY_COLUMN_USAGE AS KCU
WHERE KCU.CONSTRAINT_SCHEMA = 'DATABASENAME'
AND KCU.POSITION_IN_UNIQUE_CONSTRAINT >= 0
AND KCU.CONSTRAINT_NAME NOT LIKE 'PRIMARY';
ALTER TABLE schemaname.tablename DROP FOREIGN KEY key_name;
...
ALTER TABLE test DISCARD TABLESPACE;
...
cp *.cfg /var/lib/mysql
cp *.ibd /var/lib/mysql
sudo chown -R mysql:mysql /var/lib/mysql
ALTER TABLE test IMPORT TABLESPACE;
...
ALTER TABLE schemaname.tablename ADD CONSTRAINT key_name FOREIGN KEY (`column_name`) REFERENCES `foreign_table` (`column_name`) ON UPDATE NO ACTION ON DELETE NO ACTION;
...
USE DATABASENAME;
SELECT * FROM test LIMIT 10;
mariadb-dump -u user -p --single-transaction --master-data=2 > fullbackup.sql
$ mariadb-backup --backup \
--slave-info --safe-slave-backup \
--target-dir=/var/mariadb/backup/ \
--user=mariadb-backup --password=mypassword
ALTER TABLE schemaname.tablename DROP FOREIGN KEY key_name;
...
ALTER TABLE test DISCARD TABLESPACE;
...
cp *.cfg /var/lib/mysql
cp *.ibd /var/lib/mysql
sudo chown -R mysql:mysql /var/lib/mysql
ALTER TABLE test IMPORT TABLESPACE;
...
ALTER TABLE schemaname.tablename ADD CONSTRAINT key_name FOREIGN KEY (`column_name`) REFERENCES `foreign_table` (`column_name`) ON UPDATE NO ACTION ON DELETE NO ACTION;
...
mariadb-bin.000096 568 0-1-2
backup_type = full-backuped
from_lsn = 0
to_lsn = 1635102
last_lsn = 1635102
recover_binlog_info = 0
wsrep_local_state_uuid:wsrep_last_committed
d38587ce-246c-11e5-bcce-6bbd0831cc0f:1352215
A Stored Procedure is a routine invoked with a CALL statement. It may have input parameters, output parameters and parameters that are both input parameters and output parameters.
Here's a skeleton example to see a stored procedure in action:
DELIMITER //
CREATE PROCEDURE Reset_animal_count()
MODIFIES SQL DATA
UPDATE animal_count SET animals = 0;
//
DELIMITER ;
First, the delimiter is changed, since the function definition will contain the regular semicolon delimiter. The procedure is named Reset_animal_count. MODIFIES SQL DATA indicates that the procedure will perform a write action of sorts, and modify data. It's for advisory purposes only. Finally, there's the actual SQL statement - an UPDATE.
SELECT * FROM animal_count;
+---------+
| animals |
+---------+
| 101 |
+---------+
CALL Reset_animal_count();
SELECT * FROM animal_count;
+---------+
| animals |
+---------+
| 0 |
+---------+
A more complex example, with input parameters, from an actual procedure used by banks:
CREATE PROCEDURE
Withdraw /* Routine name */
(parameter_amount DECIMAL(6,2), /* Parameter list */
parameter_teller_id INTEGER,
parameter_customer_id INTEGER)
MODIFIES SQL DATA /* Data access clause */
BEGIN /* Routine body */
UPDATE Customers
SET balance = balance - parameter_amount
WHERE customer_id = parameter_customer_id;
UPDATE Tellers
SET cash_on_hand = cash_on_hand + parameter_amount
WHERE teller_id = parameter_teller_id;
INSERT INTO Transactions VALUES (
parameter_customer_id,
parameter_teller_id,
parameter_amount);
END;
See CREATE PROCEDURE for full syntax details.
Security is a key reason. Banks commonly use stored procedures so that applications and users don't have direct access to the tables. Stored procedures are also useful in an environment where multiple languages and clients are all used to perform the same operations.
To find which stored procedures exist on the server, use SHOW PROCEDURE STATUS.
SHOW PROCEDURE STATUS\G
*************************** 1. row ***************************
Db: test
Name: Reset_animal_count
Type: PROCEDURE
Definer: root@localhost
Modified: 2013-06-03 08:55:03
Created: 2013-06-03 08:55:03
Security_type: DEFINER
Comment:
character_set_client: utf8
collation_connection: utf8_general_ci
Database Collation: latin1_swedish_ci
or query the routines table in the INFORMATION_SCHEMA database directly:
SELECT ROUTINE_NAME FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_TYPE='PROCEDURE';
+--------------------+
| ROUTINE_NAME |
+--------------------+
| Reset_animal_count |
+--------------------+
To find out what the stored procedure does, use SHOW CREATE PROCEDURE.
SHOW CREATE PROCEDURE Reset_animal_count\G
*************************** 1. row ***************************
Procedure: Reset_animal_count
sql_mode:
Create Procedure: CREATE DEFINER=`root`@`localhost` PROCEDURE `Reset_animal_count`()
MODIFIES SQL DATA
UPDATE animal_count SET animals = 0
character_set_client: utf8
collation_connection: utf8_general_ci
Database Collation: latin1_swedish_ci
To drop a stored procedure, use the DROP PROCEDURE statement.
DROP PROCEDURE Reset_animal_count;
To change the characteristics of a stored procedure, use ALTER PROCEDURE. However, you cannot change the parameters or body of a stored procedure using this statement; to make such changes, you must drop and re-create the procedure using CREATE OR REPLACE PROCEDURE (which retains existing privileges), or DROP PROCEDURE followed by CREATE PROCEDURE.
See the article Stored Routine Privileges.
This page is licensed: CC BY-SA / Gnu FDL
When a transaction writes data, it inserts the new values directly into the table indexes or data (in the buffer pool or in physical files). No private copies are created. The old versions of data being modified by active InnoDB transactions are stored in the undo log. The original data can then be restored, or viewed by a consistent read.
Before a row is modified, a diff is copied into the undo log. Each normal row contains a pointer to the most recent version of the same row in the undo log, and each row in the undo log contains a pointer to the previous version, if any. So, each modified row has a history chain.
Rows are never physically deleted until a transaction ends. If they were deleted, the restore in ROLLBACK would be impossible. Thus, rows are simply marked for deletion.
Each transaction uses a view of the records. The transaction isolation level determines how this view is created. For example, READ UNCOMMITTED usually uses the current version of rows, even if they are not committed (dirty reads). Other isolation levels require that the most recent committed version of rows is searched in the undo log. READ COMMITTED uses a fresh view for each statement, while REPEATABLE READ and SERIALIZABLE use the same view for the whole transaction.
There is also a global history list of the data. When a transaction is committed, its history is added to this history list. The order of the list is the chronological order of the commits.
The purge thread deletes the rows in the undo log which are not needed by any existing view. Rows for which a newer version exists are deleted, as are the delete-marked rows.
If InnoDB needs to restore an old version, it will simply replace the newer version with the older one. When a transaction inserts a new row, there is no older version. However, in that case, the restore can be done by deleting the inserted rows.
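The version-chain and purge mechanics described above can be sketched in a few lines. The following Python model is purely illustrative (the class and method names are invented, and real InnoDB operates on undo records, not Python lists): each row keeps its history newest-first, a consistent read walks the chain to the newest version visible to its snapshot, and purge discards versions that no open view can still reach.

```python
class Row:
    """Toy model of a row with its undo-log history chain, newest first."""

    def __init__(self, value, trx_id):
        self.versions = [(trx_id, value)]  # (creating transaction id, value)

    def update(self, value, trx_id):
        # The previous version stays in the chain (conceptually, the undo log).
        self.versions.insert(0, (trx_id, value))

    def read(self, view_trx_id):
        # A consistent read returns the newest version created at or
        # before the reader's snapshot.
        for trx_id, value in self.versions:
            if trx_id <= view_trx_id:
                return value
        return None  # the row did not exist yet for this view

    def purge(self, oldest_open_view):
        # Everything older than the newest version visible to every open
        # view is unreachable and can be deleted.
        for i, (trx_id, _) in enumerate(self.versions):
            if trx_id <= oldest_open_view:
                del self.versions[i + 1:]
                return

row = Row('v1', trx_id=1)
row.update('v2', trx_id=5)
row.update('v3', trx_id=9)
print(row.read(7))       # a view opened between trx 5 and 9 sees 'v2'
row.purge(oldest_open_view=7)
print(len(row.versions)) # the trx-1 version is gone; 'v3' and 'v2' remain
```

A long-running view (a small oldest_open_view) keeps old versions alive, which is exactly why long transactions make the undo log grow.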
Understanding how the undo log works helps with understanding the negative effects of long transactions.
Long transactions generate several old versions of the rows in the undo log. Those rows will probably be needed for a longer time, because other long transactions will need them. Since those transactions will generate more modified rows, a sort of combinatorial explosion can be observed. Thus, the undo log requires more space.
Transactions may need to read very old versions of the rows in the history list, thus their performance will degrade.
Of course read-only transactions do not write more entries in the undo log; however, they delay the purging of existing entries.
Also, long transactions can more likely result in deadlocks, but this problem is not related to the undo log.
Feature: Detail
Transaction Log: InnoDB Undo Log
Storage Engine: InnoDB
Purpose: Multi-Version Concurrency Control (MVCC)
Availability: All ES and CS versions
Location: By default, located in the InnoDB system tablespace. When innodb_undo_tablespaces is set, located in the directory set by (defaults to )
Quantity: Set by
Size: 10 MB per tablespace by default (grows as needed)
System variables affecting undo logs include:
The undo log is not a log file that can be viewed on disk in the usual sense, such as the error log or slow query log, but rather an area of storage.
Before , the undo log is usually part of the physical system tablespace, but from , the innodb_undo_directory and innodb_undo_tablespaces system variables can be used to split the undo log into different tablespaces and store it in a different location (perhaps on a different storage device). From , multiple undo tablespaces are enabled by default, and the innodb_undo_tablespaces default is changed to 3 so that the space occupied by possible bursts of undo log records can be reclaimed after innodb_undo_log_truncate is set.
Each insert or update portion of the undo log is known as a rollback segment. The innodb_undo_logs system variable allowed reducing the number of rollback segments from the usual 128, to limit the number of concurrently active write transactions. innodb_undo_logs was deprecated and ignored in and removed in , as it always makes sense to use the maximum number of rollback segments.
The related innodb_available_undo_logs status variable stores the total number of available InnoDB undo logs.
This page is licensed: CC BY-SA / Gnu FDL
MariaDB features pluggable storage engines to allow per-table workload optimization.
A storage engine is a type of for MariaDB:
Different storage engines may be optimized for different workloads, such as transactional workloads, analytical workloads, or high throughput workloads.
Different storage engines may be designed for different use cases, such as federated table access, table sharding, and table archiving in the cloud.
Different tables on the same server may use different storage engines.
Identify the server's global default storage engine by using to query the system variable:
Identify the session's default storage engine by using :
Global default storage engine:
Session default storage engine supersedes global default during this session:
The storage engine is specified at table creation time using an ENGINE = parameter.
Standard MariaDB storage engines are used for System Table storage:
Yes, different tables can use different storage engines on the same server.
To create a table with a specific storage engine, specify the ENGINE table option to the statement.
Yes, a single query can reference tables that use multiple storage engines.
In some cases, special configuration may be required. For example, ColumnStore requires cross engine joins to be configured.
is the recommended storage engine for transactional or OLTP workloads.
is the recommended storage engine for analytical or OLAP workloads.
An application that performs both transactional and analytical queries is known as .
HTAP can be implemented with MariaDB by using for transactional queries and for analytical queries.
.
, which shows available storage engines.
, which shows storage engine by table.
This page is: Copyright © 2025 MariaDB. All rights reserved.
If you are completely new to MariaDB and relational databases, you may want to start with the . Also, make sure you understand the connection parameters discussed in the article.
There are a number of common problems that can occur when connecting to MariaDB.
If the error you get is something like:
or
the server is either not running, or not running on the specified port, socket or pipe. Make sure you are using the correct host, port, pipe, socket and protocol options, or alternatively, see , or .
The socket file can be in a non-standard path. In this case, the socket option is probably written in the my.cnf file. Check that its value is identical in the [mysqld] and [client] sections; if not, the client will look for a socket in the wrong place.
If unsure where the Unix socket file is located, it's possible to find this out, for example:
By default, the MariaDB server does not accept connections from a remote client, or connections made over TCP with a hostname, and has to be configured to permit these.
To solve this, see
The is enabled by default on Unix-like systems. This uses operating system credentials when connecting to MariaDB via the local Unix socket file. See for instructions on connecting and on switching to password-based authentication as well as for an overview.
Authentication is granted to a particular username/host combination. 'user1'@'localhost', for example, is not the same as 'user1'@'166.78.144.191'. See the article for details on granting permissions.
Passwords are hashed with the PASSWORD() function. If you have set a password with the SET PASSWORD statement, the PASSWORD() function must be used at the same time. For example, use SET PASSWORD FOR 'bob'@'%.loc.gov' = PASSWORD('newpass') rather than just SET PASSWORD FOR 'bob'@'%.loc.gov' = 'newpass';
If you can run regular queries, but get an authentication error when running the , or statements, you do not have permission to write files to the server. This requires the FILE privilege. See the article.
If you can connect to the server, but not to a database, for example:
or can connect to a particular database, but not another, for example if mariadb -u name -p db1 works but mariadb -u name -p db2 does not, you have not been granted permission for the particular database. See the article.
It's possible that option files or environment variables may be providing incorrect connection parameters. Check the values provided in any option files read by the client you are using (see and the documentation for the particular client you're using - see ).
Option files can usually be suppressed with the --no-defaults option, for example:
If you are unable to connect to a server, for example because you have lost the root password, you can start the server without using the privilege tables by running it with the --skip-grant-tables option, which gives users full access to all tables. You can then run FLUSH PRIVILEGES to resume using the grant tables, followed by SET PASSWORD to change the password for an account.
You may have created a user with something like:
This creates a user with the '%' wildcard host.
However, you may still be failing to login from localhost. Some setups create anonymous users, including localhost. So the following records exist in the user table:
Since you are connecting from localhost, the anonymous credentials, rather than those for the 'melisa' user, are used. The solution is either to add a new user specific to localhost, or to remove the anonymous localhost user.
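The way the server picks between these rows can be sketched as follows. This is an illustrative Python model, not MariaDB's actual matching code (the function name and the sorting shortcut are invented): entries with a literal host sort before wildcard-host entries, so the anonymous 'localhost' row is tried, and matches, before 'melisa'@'%'.

```python
def match_account(user, host, entries):
    """Pick the first matching (user, host) entry, most specific host first.

    entries: (user, host) rows from the user table. '' is the anonymous
    user and '%' matches any host. Toy model for illustration only.
    """
    def specificity(entry):
        _, entry_host = entry
        return 0 if entry_host != '%' else 1  # literal hosts before wildcards

    for entry_user, entry_host in sorted(entries, key=specificity):
        user_ok = entry_user in ('', user)   # anonymous matches any user
        host_ok = entry_host in ('%', host)
        if user_ok and host_ok:
            return entry_user, entry_host
    return None

# The situation from the text: 'melisa'@'%' plus an anonymous localhost user.
entries = [('melisa', '%'), ('', 'localhost')]
print(match_account('melisa', 'localhost', entries))  # anonymous entry wins
print(match_account('melisa', 'remote.example', entries))  # 'melisa'@'%' wins
```

This is why removing the anonymous localhost user (or adding 'melisa'@'localhost') fixes the login.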
CC BY-SA / Gnu FDL
MariaDB has many built-in functions that can be used to manipulate strings of data. With these functions, one can format data, extract certain characters, or use search expressions. Good developers should be aware of the string functions that are available. Therefore, in this article we will go through several string functions, grouping them by similar features, and provide examples of how they might be used.
There are several string functions that are used to format text and numbers for nicer display. A popular and very useful function for pasting together the contents of data fields with text is the CONCAT() function. As an example, suppose that a table called contacts has a column for each sales contact's first name and another for the last name. The following SQL statement would put them together:
SELECT CONCAT(name_first, ' ', name_last)
AS Name
FROM contacts;
This statement will display the first name, a space, and then the last name together in one column. The AS clause will change the column heading of the results to Name.
A less used concatenating function is CONCAT_WS(). It will put together columns with a separator between each. This can be useful when making data available for other programs. For instance, suppose we have a program that will import data, but it requires the fields to be separated by vertical bars. We could just export the data, or we could use a SELECT statement like the one that follows in conjunction with an interface written with an API language like Perl:
SELECT CONCAT_WS('|', col1, col2, col3)
FROM table1;
The first element above is the separator. The remaining elements are the columns to be strung together.
If we want to format a long number with commas every three digits and a period for the decimal point (e.g., 100,000.00), we can use the function FORMAT() like so:
SELECT CONCAT('$', FORMAT(col5, 2))
FROM table3;
In this statement, the CONCAT() will place a dollar sign in front of the numbers found in the col5 column, which will be formatted with commas by FORMAT(). The 2 within the FORMAT() stipulates two decimal places.
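The same output can be reproduced with Python's format mini-language, as a quick way to check the expected result (an illustration only; the helper name is invented and this is not MariaDB code):

```python
def sql_format(number, decimals):
    """Mimic MariaDB's FORMAT(): thousands separators plus a fixed
    number of decimal places (illustrative model)."""
    return f"{number:,.{decimals}f}"

print('$' + sql_format(100000, 2))  # $100,000.00
```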
Occasionally, one will want to convert the text from a column to either all upper-case letters or all lower-case letters. In the example that follows, the output of the first column is converted to upper-case and the second to lower-case:
SELECT UCASE(col1),
LCASE(col2)
FROM table4;
When displaying data in forms, it's sometimes useful to pad the data displayed with zeros or dots or some other filler. This can be necessary when dealing with VARCHAR columns where the width varies to help the user to see the column limits. There are two functions that may be used for padding: LPAD() and RPAD().
SELECT RPAD(part_nbr, 8, '.') AS 'Part Nbr.',
LPAD(description, 15, '_') AS Description
FROM catalog;
In this SQL statement, dots are added to the right end of each part number. So a part number of "H200" will display as "H200....", but without the quotes. Each part's description will have underscores preceding it. A part with a description of "brass hinge" will display as "____brass hinge".
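For these examples, the padding behavior matches Python's rjust/ljust, which makes it easy to verify the expected display (an illustration; note that MariaDB's LPAD() and RPAD() additionally truncate strings longer than the target length, which rjust/ljust do not):

```python
# RPAD(part_nbr, 8, '.') pads on the right, like str.ljust:
print("H200".ljust(8, '.'))           # H200....

# LPAD(description, 15, '_') pads on the left, like str.rjust:
print("brass hinge".rjust(15, '_'))   # ____brass hinge
```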
If a column is a CHAR data-type, a fixed width column, then it may be necessary to trim any leading or trailing spaces from displays. There are a few functions to accomplish this task. The LTRIM() function will eliminate any leading spaces to the left, so "   H200" becomes "H200". For columns with trailing spaces, spaces on the right, RTRIM() will work: "H500   " becomes "H500". A more versatile trimming function, though, is TRIM(). With it one can trim left, right or both. Below are a few examples:
SELECT TRIM(LEADING '.' FROM col1),
TRIM(TRAILING FROM col2),
TRIM(BOTH '_' FROM col3),
TRIM(col4)
FROM table5;
In the first TRIM() clause, the padding component is specified; the leading dots are to be trimmed from the output of col1. The trailing spaces will be trimmed off of col2 (space is the default). Both leading and trailing underscores are trimmed from col3 above. Unless specified, BOTH is the default, so leading and trailing spaces are trimmed from col4 in the statement here.
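Python's strip family mirrors the four TRIM() clauses, which is a convenient way to sanity-check the expected results (illustrative only, not MariaDB code):

```python
print("...H200".lstrip('.'))   # TRIM(LEADING '.' FROM col1)  -> H200
print("H200   ".rstrip())      # TRIM(TRAILING FROM col2)     -> H200
print("_H200_".strip('_'))     # TRIM(BOTH '_' FROM col3)     -> H200
print("  H200  ".strip())      # TRIM(col4), BOTH by default  -> H200
```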
When there is a need to extract specific elements from a column, MariaDB has a few functions that can help. Suppose a column in the table contacts contains the telephone numbers of sales contacts, including the area-codes, but without any dashes or parentheses. The area-code of each could be extracted for sorting with the LEFT() and the telephone number with the RIGHT() function.
SELECT LEFT(telephone, 3) AS area_code,
RIGHT(telephone, 7) AS tel_nbr
FROM contacts
ORDER BY area_code;
In the LEFT() function above, the column telephone is given along with the number of characters to extract, starting from the first character on the left in the column. The RIGHT() function is similar, but it starts from the last character on the right, counting left to capture, in this statement, the last seven characters. In the SQL statement above, area_code is reused to order the results set. To reformat the telephone number, it will be necessary to use the SUBSTRING() function.
SELECT CONCAT('(', LEFT(telephone, 3), ') ',
SUBSTRING(telephone, 4, 3), '-',
MID(telephone, 7)) AS 'Telephone Number'
FROM contacts
ORDER BY LEFT(telephone, 3);
In this SQL statement, the CONCAT() function is employed to assemble some characters and extracted data to produce a common display for telephone numbers (e.g., (504) 555-1234). The first element of the CONCAT() is an opening parenthesis. Next, a LEFT() is used to get the first three characters of telephone, the area-code. After that a closing parenthesis, along with a space is added to the output. The next element uses the SUBSTRING() function to extract the telephone number's prefix, starting at the fourth position, for a total of three characters. Then a dash is inserted into the display. Finally, the function MID() extracts the remainder of the telephone number, starting at the seventh position. The functions MID() and SUBSTRING() are interchangeable and their syntax are the same. By default, for both functions, if the number of characters to capture isn't specified, then it's assumed that the remaining ones are to be extracted.
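The extraction logic above can be replayed with plain string slicing, remembering that SQL positions are 1-based while Python's are 0-based (an illustrative sketch; the function name is invented):

```python
def format_phone(telephone):
    """Rebuild the CONCAT/LEFT/SUBSTRING/MID expression with slices."""
    area = telephone[:3]      # LEFT(telephone, 3)
    prefix = telephone[3:6]   # SUBSTRING(telephone, 4, 3): 1-based start 4
    rest = telephone[6:]      # MID(telephone, 7): rest of the string
    return f"({area}) {prefix}-{rest}"

print(format_phone("5045551234"))  # (504) 555-1234
```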
There are a few functions in MariaDB that can help in manipulating text. One such function is REPLACE(). With it every occurrence of a search parameter in a string can be replaced. For example, suppose we wanted to replace the title Mrs. with Ms. in a column containing the person's title, but only in the output. The following SQL would do the trick:
SELECT CONCAT(REPLACE(title, 'Mrs.', 'Ms.'),
' ', name_first, ' ', name_last) AS Name
FROM contacts;
We're using the ever handy CONCAT() function to put together the contact's name with spaces. The REPLACE() function extracts each title and replaces Mrs. with Ms., where applicable. Otherwise, for all other titles, it displays them unchanged.
If we want to insert or replace certain text from a column (but not all of its contents), we could use the INSERT() function in conjunction with the LOCATE() function. For example, suppose another contacts table has the contact's title and full name in one column. To change the occurrences of Mrs. to Ms., we could not use REPLACE() since the title is embedded in this example. Instead, we would do the following:
SELECT INSERT(name, LOCATE('Mrs.', name), 4, 'Ms.')
FROM contacts;
The first element of the INSERT() function is the column. The second element, which contains the LOCATE(), is the position in the string that text is to be inserted. The third element is optional; it states the number of characters to overwrite. In this case, Mrs. which is four characters is overwritten with Ms. (the final element), which is only three. Incidentally, if 0 is specified, then nothing is overwritten, text is inserted only. As for the LOCATE() function, the first element is the search text and the second the column. It returns the position within the column where the text is found. If it's not found, then 0 is returned. A value of 0 for the position in the INSERT() function negates it and returns the value of name unchanged.
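The interplay of the two functions, including the "position 0 means unchanged" behavior, can be modeled in a few lines. This is an illustrative sketch of LOCATE(substr, str) and INSERT(str, pos, len, newstr) semantics (1-based positions), not MariaDB's implementation:

```python
def locate(substr, s):
    """LOCATE(substr, str): 1-based position, 0 if not found (model)."""
    return s.find(substr) + 1  # str.find's -1 becomes 0

def sql_insert(s, pos, length, replacement):
    """INSERT(str, pos, len, newstr): overwrite len characters starting
    at 1-based pos; an out-of-range pos returns the string unchanged (model)."""
    if pos < 1 or pos > len(s):
        return s
    return s[:pos - 1] + replacement + s[pos - 1 + length:]

name = "Mrs. Jones"
print(sql_insert(name, locate("Mrs.", name), 4, "Ms."))            # Ms. Jones
print(sql_insert("Dr. Jones", locate("Mrs.", "Dr. Jones"), 4, "Ms."))  # unchanged
```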
On the odd chance that there is a need to reverse the content of a column, there's the REVERSE() function. You would just place the column name within the function. Another minor function is the REPEAT() function. With it a string may be repeated in the display:
SELECT REPEAT(col1, 2)
FROM table1;
The first component of the function above is the string or column to be repeated. The second component states the number of times it's to be repeated.
The function CHAR_LENGTH() is used to determine the number of characters in a string. This could be useful in a situation where a column contains different types of information of specific lengths. For instance, suppose a column in a table for a college contains identification numbers for students, faculty, and staff. If student identification numbers have eight characters while others have less, the following will count the number of student records:
SELECT COUNT(school_id)
AS 'Number of Students'
FROM table8
WHERE CHAR_LENGTH(school_id)=8;
The COUNT() function above counts the number of rows that meet the condition of the WHERE clause.
In a SELECT statement, an ORDER BY clause can be used to sort a results set by a specific column. However, if the column contains IP addresses, a simple sort may not produce the desired results:
SELECT ip_address
FROM computers WHERE server='Y'
ORDER BY ip_address LIMIT 3;
+-------------+
| ip_address |
+-------------+
| 10.0.1.1 |
| 10.0.11.1 |
| 10.0.2.1 |
+-------------+
In the limited results above, the IP address 10.0.2.1 should be second. This happens because the column is being sorted lexically and not numerically. The function INET_ATON() will solve this sorting problem.
SELECT ip_address
FROM computers WHERE server='Y'
ORDER BY INET_ATON(ip_address) LIMIT 3;
Basically, the INET_ATON() function will convert IP addresses to regular numbers for numeric sorting. For instance, if we were to use the function in the list of columns in a SELECT statement, instead of the WHERE clause, the address 10.0.1.1 would return 167772417, 10.0.11.1 will return 167774977, and 10.0.2.1 the number 167772673. As a complement to INET_ATON(), the function INET_NTOA() will translate these numbers back to their original IP addresses.
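The arithmetic behind INET_ATON() is simple: the four octets are treated as digits of a base-256 number. A quick illustrative reimplementation (IPv4 only, no error handling) confirms the values and the corrected sort order:

```python
def inet_aton(address):
    """Model of MariaDB's INET_ATON() for dotted-quad IPv4 addresses."""
    a, b, c, d = (int(part) for part in address.split('.'))
    return a * 256**3 + b * 256**2 + c * 256 + d

ips = ["10.0.1.1", "10.0.11.1", "10.0.2.1"]
for ip in ips:
    print(ip, inet_aton(ip))
# 10.0.1.1 -> 167772417, 10.0.11.1 -> 167774977, 10.0.2.1 -> 167772673

print(sorted(ips, key=inet_aton))  # numeric order: 10.0.1.1, 10.0.2.1, 10.0.11.1
```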
MariaDB is fairly case insensitive, which usually is fine. However, to be able to check by case, the STRCMP() function can be used. It converts the column examined to a string and makes a comparison to the search parameter.
SELECT col1, col2
FROM table6
WHERE STRCMP(col3, 'text')=0;
If there is an exact match, the function STRCMP() returns 0. So if col3 here contains "Text", it won't match. Incidentally, if col3 alphabetically is before the string to which it's compared, a -1 will be returned. If it's after it, a 1 is returned.
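The three-way result is easy to model. This illustrative sketch assumes a case-sensitive (binary) comparison, where upper-case letters sort before lower-case ones; it is not MariaDB's collation-aware implementation:

```python
def strcmp(a, b):
    """Model of STRCMP() under a binary comparison: -1, 0, or 1."""
    return (a > b) - (a < b)

print(strcmp("text", "text"))   # 0: exact match
print(strcmp("Text", "text"))   # -1: 'T' sorts before 't' in binary order
print(strcmp("zebra", "text"))  # 1
```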
When you have a list of items in one string, the SUBSTRING_INDEX() function can be used to pull out a sub-string of data. As an example, suppose we have a column which has five elements, but we want to retrieve just the first two elements. This SQL statement will return them:
SELECT SUBSTRING_INDEX(col4, '|', 2)
FROM table7;
The first component in the function above is the column or string to be picked apart. The second component is the delimiter. The third is the number of elements to return, counting from the left. If we want to grab the last two elements, we would use a negative two to instruct MariaDB to count from the right end.
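The counting rule, including the negative count, can be modeled with a split and join. This is an illustrative sketch of SUBSTRING_INDEX() semantics, not MariaDB code:

```python
def substring_index(s, delim, count):
    """Model of SUBSTRING_INDEX(): the first `count` elements from the
    left, or the last `-count` elements when count is negative."""
    parts = s.split(delim)
    if count >= 0:
        return delim.join(parts[:count])
    return delim.join(parts[count:])

print(substring_index("a|b|c|d|e", "|", 2))   # a|b
print(substring_index("a|b|c|d|e", "|", -2))  # d|e
```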
There are more string functions available in MariaDB. A few of the functions mentioned here have aliases or close alternatives. There are also functions for converting between ASCII, binary, hexadecimal, and octal strings. And there are also string functions related to text encryption and decryption that were not mentioned. However, this article has given you a good collection of common string functions that will assist you in building more powerful and accurate SQL statements.
This page is licensed: CC BY-SA / Gnu FDL
Engine
Target
Optimization
Availability
Read-Heavy
Reads
ES 10.5+
Analytics, HTAP
Big Data, Analytical
ES 10.5+
General Purpose
Mixed Read/Write
ES 10.5+
Cache, Temp
Temporary Data
ES 10.5+
Reads
Reads
ES 10.5+
Write-Heavy
I/O Reduction, SSD
ES 10.5+
Cloud
Read-Only
ES 10.5+
Federation
Sharding, Interlink
ES 10.5+
SHOW GLOBAL VARIABLES LIKE 'default_storage_engine';
+------------------------+--------+
| Variable_name | Value |
+------------------------+--------+
| default_storage_engine | InnoDB |
+------------------------+--------+
SHOW SESSION VARIABLES LIKE 'default_storage_engine';
+------------------------+--------+
| Variable_name | Value |
+------------------------+--------+
| default_storage_engine | InnoDB |
+------------------------+--------+
SET GLOBAL default_storage_engine='MyRocks';
SET SESSION default_storage_engine='MyRocks';
[mariadb]
...
default_storage_engine=MyRocks
SHOW ENGINES;
CREATE TABLE accounts.messages (
id INT PRIMARY KEY AUTO_INCREMENT,
sender_id INT,
receiver_id INT,
message TEXT
) ENGINE = MyRocks;
mariadb -uname -p
ERROR 2002 (HY000): Can't connect to local MySQL server through
socket '/var/run/mysqld/mysqld.sock' (2 "No such file or directory")
mariadb -uname -p --port=3307 --protocol=tcp
ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost'
(111 "Connection refused")
netstat -ln | grep mysqld
unix 2 [ ACC ] STREAM LISTENING 33209505 /var/run/mysqld/mysqld.sock
(/my/maria-10.4) ./client/mysql --host=myhost --protocol=tcp --port=3306 test
ERROR 2002 (HY000): Can't connect to MySQL server on 'myhost' (115)
(/my/maria-10.4) telnet myhost 3306
Trying 192.168.0.11...
telnet: connect to address 192.168.0.11: Connection refused
(/my/maria-10.4) perror 115
OS error code 115: Operation now in progress
USE test;
ERROR 1044 (42000): Access denied for user 'ian'@'localhost' to database 'test'
mariadb-import --no-defaults ...
CREATE USER melisa identified BY 'password';
SELECT user,host FROM mysql.user WHERE user='melisa';
+--------+------+
| user | host |
+--------+------+
| melisa | % |
+--------+------+
SELECT user,host FROM mysql.user WHERE user='melisa' OR user='';
+--------+-----------+
| user | host |
+--------+-----------+
| melisa | % |
| | localhost |
+--------+-----------+
A high-level overview of the main reasons for choosing a particular storage engine:
InnoDB is a good general transaction storage engine, and the best choice in most cases. It is the default storage engine.
MyISAM has a small footprint and allows for easy copying between systems. MyISAM is MySQL's oldest storage engine. There is usually little reason to use it except for legacy purposes. Aria is MariaDB's more modern improvement.
XtraDB is no longer available. It was a performance-enhanced fork of InnoDB and was MariaDB's default engine until .
When you want to split your database load on several servers or optimize for scaling. We also suggest looking at Galera, a synchronous multi-master cluster.
Spider uses partitioning to provide data sharding through multiple servers.
utilizes a massively parallel distributed data architecture and is designed for big data scaling to process petabytes of data.
TokuDB is a transactional storage engine which is optimized for workloads that do not fit in memory, and provides a good compression ratio. TokuDB has been deprecated by its upstream developers, and is disabled in , and removed in
MyRocks enables greater compression than InnoDB, as well as less write amplification giving better endurance of flash storage and improving overall throughput.
The Archive storage engine is, unsurprisingly, best used for archiving.
TokuDB is a transactional storage engine which is optimized for workloads that do not fit in memory, and provides a good compression ratio. TokuDB has been deprecated by its upstream developers, and is disabled in , and removed in
When you want to use data not stored in a MariaDB database.
CONNECT allows access to different kinds of text files and remote resources as if they were regular MariaDB tables.
The CSV storage engine can read and append to files stored in CSV (comma-separated-values) format. However, since , CONNECT is a better choice, as it can read and write such files more flexibly.
FederatedX uses libmysql to talk to the data source, the data source being a remote RDBMS. Currently, since FederatedX only uses libmysql, it can only talk to another MySQL RDBMS.
CassandraSE is a storage engine allowing access to an older version of Apache Cassandra NoSQL DBMS. It was relatively experimental, is no longer being actively developed and has been removed in .
Search engines optimized for search.
SphinxSE is used as a proxy to run statements on a remote Sphinx database server (mainly useful for advanced fulltext searches).
Mroonga provides fast CJK-ready full text searching using column store.
S3 Storage Engine is a read-only storage engine that stores its data in Amazon S3.
Sequence allows the creation of ascending or descending sequences of numbers (positive integers) with a given starting value, ending value and increment, creating virtual, ephemeral tables automatically when you need them.
The BLACKHOLE storage engine accepts data but does not store it and always returns an empty result. This can be useful in replication environments, for example, if you want to run complex filtering rules on a slave without incurring any overhead on a master.
OQGRAPH allows you to handle hierarchies (tree structures) and complex graphs (nodes having many connections in several directions).
The Archive storage engine is, unsurprisingly, best used for archiving.
Aria, MariaDB's more modern improvement on MyISAM, has a small footprint and allows for easy copy between systems.
The BLACKHOLE storage engine accepts data but does not store it and always returns an empty result. This can be useful in replication environments, for example, if you want to run complex filtering rules on a slave without incurring any overhead on a master.
CassandraSE is a storage engine allowing access to an older version of Apache Cassandra NoSQL DBMS. It was relatively experimental, is no longer being actively developed and has been removed in .
utilizes a massively parallel distributed data architecture and is designed for big data scaling to process petabytes of data.
CONNECT allows access to different kinds of text files and remote resources as if they were regular MariaDB tables.
The CSV storage engine can read and append to files stored in CSV (comma-separated-values) format. However, since , CONNECT is a better choice, as it can read and write such files more flexibly.
FederatedX uses libmysql to talk to the data source, the data source being a remote RDBMS. Currently, since FederatedX only uses libmysql, it can only talk to another MySQL RDBMS.
InnoDB is a good general transaction storage engine, and the best choice in most cases. It is the default storage engine.
The MERGE storage engine is a collection of identical MyISAM tables that can be used as one. "Identical" means that all tables have identical column and index information.
Mroonga provides fast CJK-ready full text searching using column store.
MyISAM has a small footprint and allows for easy copying between systems. MyISAM is MySQL's oldest storage engine. There is usually little reason to use it except for legacy purposes. Aria is MariaDB's more modern improvement.
MyRocks enables greater compression than InnoDB, as well as less write amplification giving better endurance of flash storage and improving overall throughput.
OQGRAPH allows you to handle hierarchies (tree structures) and complex graphs (nodes having many connections in several directions).
S3 Storage Engine is a read-only storage engine that stores its data in Amazon S3.
Sequence allows the creation of ascending or descending sequences of numbers (positive integers) with a given starting value, ending value and increment, creating virtual, ephemeral tables automatically when you need them.
SphinxSE is used as a proxy to run statements on a remote Sphinx database server (mainly useful for advanced fulltext searches).
Spider uses partitioning to provide data sharding through multiple servers.
TokuDB is a transactional storage engine which is optimized for workloads that do not fit in memory, and provides a good compression ratio. TokuDB has been deprecated by its upstream developers; it is disabled in , and removed in .
XtraDB is no longer available. It was a performance-enhanced fork of InnoDB and was MariaDB's default engine until .
This page is licensed: CC BY-SA / Gnu FDL
MariaDB Enterprise Server uses the InnoDB storage engine by default. InnoDB is a general purpose transactional storage engine that is performant, ACID-compliant, and well-suited for most workloads.
The InnoDB storage engine:
Is available with all versions of and MariaDB Community Server.
Is a general purpose storage engine.
Is transactional and well-suited for online transactional processing (OLTP) workloads.
Is ACID-compliant.
Performs well for mixed read-write workloads.
Supports online DDL.
Background Thread Pool
This page is: Copyright © 2025 MariaDB. All rights reserved.
When a transaction updates a row in an InnoDB table, InnoDB's MVCC implementation keeps old versions of the row in the . The old versions are kept at least until all transactions older than the transaction that updated the row are no longer open. At that point, the old versions can be deleted. InnoDB has a purge process that deletes these old versions.
In MariaDB Enterprise Server, the InnoDB storage engine uses Purge Threads to perform garbage collection in the background. The Purge Threads are related to multi-version concurrency control (MVCC).
The Purge Threads perform garbage collection of various items:
The Purge Threads perform garbage collection of the . When a row is updated in the clustered index, InnoDB updates the values in the clustered index, and the old row version is added to the Undo Log. The Purge Threads scan the Undo Log for row versions that are not needed by open transactions and permanently delete them. In ES 10.5 and later, if the remaining clustered index record is the oldest possible row version, the Purge Thread resets the record's hidden DB_TRX_ID field to 0.
The Purge Threads perform garbage collection of index records. When an indexed column is updated, InnoDB creates a new index record for the updated value in each affected index, and the old index records are delete-marked. When the primary key column is updated, InnoDB creates a new index record for the updated value in every index, and each old index record is delete-marked. The Purge Threads scan for delete-marked index records and permanently delete them.
The Purge Threads perform garbage collection of freed overflow pages. , , , , , and related types are sometimes stored on overflow pages. When the value on the overflow page is deleted or updated, the overflow page is no longer needed. The Purge Threads delete these freed overflow pages.
The number of purge threads can be set by configuring the system variable. This system variable can be specified as a command-line argument to or it can be specified in a relevant server in an . For example:
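A minimal sketch of such a configuration, assuming the system variable referred to above is `innodb_purge_threads` (the original link was lost) and using an illustrative value:

```ini
[mariadb]
innodb_purge_threads=4
```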
The purge batch size is defined as the number of records that must be written before triggering purge. The purge batch size can be set by configuring the system variable. This system variable can be specified as a command-line argument to or it can be specified in a relevant server in an . For example:
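A minimal sketch of such a configuration, assuming the system variable referred to above is `innodb_purge_batch_size` (the original link was lost) and using an illustrative value:

```ini
[mariadb]
innodb_purge_batch_size=300
```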
If purge operations are lagging on a busy server, this can be a difficult situation to recover from. As a solution, InnoDB allows you to set the max purge lag. The max purge lag is defined as the maximum number of that can be waiting to be purged from the history before InnoDB begins delaying DML statements.
The max purge lag can be set by configuring the system variable. This system variable can be changed dynamically with . For example:
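A sketch of changing the value dynamically, assuming the system variable referred to above is `innodb_max_purge_lag` (the original link was lost) and using an illustrative value:

```sql
SET GLOBAL innodb_max_purge_lag=1000000;
```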
This system variable can also be specified as a command-line argument to or it can be spec