Grasp the basics of using MariaDB Server. This section introduces fundamental concepts, common SQL commands, and essential operations to get you started with your database.
Learn how to connect to MariaDB Server. This section details various methods and tools for establishing secure and efficient connections to your database from different applications and environments.
Get an overview of MariaDB Backup. This section introduces the hot physical backup tool, explaining its capabilities for efficient and consistent backups of your MariaDB Server.
A beginner-friendly primer on using the mariadb command-line client to log in, create databases, and execute basic SQL commands.
This primer is designed to teach you the basics of getting information into and out of an existing MariaDB database using the command-line client program. It's not a complete reference and will not touch on any advanced topics. It is just a quick jump-start into using MariaDB.
Log into your MariaDB server from the command-line like so:
Replace user_name with your database username. Replace ip_address with the host name or address of your server. If you're accessing MariaDB from the same server you're logged into, then you don't need to include -h and the server address.
Automate tasks in MariaDB Server with stored routines. Learn to create and manage stored procedures and functions for enhanced database efficiency and code reusability.
Utilize stored functions in MariaDB Server. This section details creating, using, and managing user-defined functions to extend SQL capabilities and streamline data manipulation.
The CONNECT storage engine has been deprecated.
Explore different partitioning types for MariaDB Server tables. Understand range, list, hash, and key partitioning to optimize data management and improve query performance.
Learn effective data handling in MariaDB Server. This section covers data types, storage engines, data manipulation, and best practices for managing your information efficiently.
Master stored procedures in MariaDB Server. This section covers creating, executing, and managing these powerful routines to encapsulate complex logic and improve application performance.
When prompted, enter your password. If your login is successful, you should see something that looks similar to this:
This is where you will enter all of your SQL statements. More about those later. For now, let's look at the components of the prompt: the "MariaDB" part means that you are connected to a MariaDB database server. The word between the brackets is the name of your default database, the test database in this example.
To make changes to a database or to retrieve data, you will need to enter an SQL statement. SQL stands for Structured Query Language. An SQL statement that requests data is called a query. Databases store information in tables. They're similar to spreadsheets, but much more efficient at managing data.
Note that the test database may not contain any data yet. If you want to follow along with the primer, copy and paste the following into the mariadb client. This will create the tables we will use and add some data to them. Don't worry about understanding them yet; we'll get to that later.
Notice the semi-colons used above. The mariadb client lets you enter very complex SQL statements over multiple lines. It won't send an SQL statement until you type a semi-colon and hit [Enter].
Let's look at what you've done so far. Enter the following:
Notice that this displays a list of the tables in the database. If you didn't already have tables in your test database, your results should look the same as above. Let's now enter the following to get information about one of these tables:
The main bit of information of interest to us is the Field column. The other columns provide useful information about the structure and type of data in the database, but the Field column gives us the column names, which are needed to retrieve data from the table.
Let's retrieve data from the books table. We'll do so by executing a SELECT statement like so:
This SQL statement, or query, asks the database to show us all of the data in the books table. The wildcard character ('*') indicates that all columns should be selected.
Suppose now that we want to add another book to this table. We'll add the book, Lair of Bones. To insert data into a table, you would use the INSERT statement. To insert information on a book, we would enter something like this:
Notice that we put a list of columns in parentheses after the name of the table, then we enter the keyword VALUES followed by a list of values in parentheses--in the same order as the columns were listed. We could put the columns in a different order, as long as the values are in the same order as we list the columns. Notice that the message returned indicates that the execution of the SQL statement went fine and one row was entered.
Execute the following SQL statement again and see what results are returned:
You should see the data you just entered on the last row of the results. In looking at the data for the other books, suppose we notice that the title of the seventh book is spelled wrong. It should be spelled The Hobbit, not The Hobbbit. We will need to update the data for that row.
To change data in a table, you will use the UPDATE statement. Let's change the spelling of the book mentioned above. To do this, enter the following:
Notice the syntax of this SQL statement. The SET clause is where you list the columns and the values to assign to them. The WHERE clause says that you want to update only rows in which the BookID column has a value of 7, of which there is only one. You can see from the message it returned that one row matched the WHERE clause and one row was changed. There are no warnings because everything went fine. Execute the SELECT from earlier to see that the data changed.
As you can see, using MariaDB isn't very difficult. You just have to understand the syntax of SQL since it doesn't allow for typing mistakes or things in the wrong order or other deviations.
Explore LINEAR HASH partitioning, a variation of HASH that uses a powers-of-two algorithm for faster partition management at the cost of distribution.
PARTITION BY LINEAR HASH (partitioning_expression)
[PARTITIONS (number_of_partitions)]

LINEAR HASH partitioning is a form of partitioning, similar to HASH partitioning, in which the server takes care of the partition in which to place the data, ensuring a relatively even distribution among the partitions.
LINEAR HASH partitioning makes use of a powers-of-two algorithm, while HASH partitioning uses the modulus of the hashing function's value. Adding, dropping, merging and splitting partitions is much faster than with the HASH partitioning type; however, data is less likely to be evenly distributed over the partitions.
This page is licensed: CC BY-SA / Gnu FDL
Learn how to effectively use MariaDB Server. This section covers SQL statements, built-in functions, client utilities, and best practices for daily database operations.
Restore specific tables from a backup. Learn the process of importing individual .ibd files to recover specific tables without restoring the whole database.
When using mariadb-backup, you don't necessarily need to restore every table and/or partition that was backed up. Even if you're starting from a full backup, it is certainly possible to restore only certain tables and/or partitions from the backup, as long as the table or partition involved is in an InnoDB file-per-table tablespace. This page documents how to restore individual tables and partitions.
Before you can restore from a backup, you first need to prepare it to make the data files consistent. You can do so with the --prepare option.
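A minimal sketch of that step (the target directory path is an assumption, adjust it to your own backup location):

```shell
mariadb-backup --prepare --target-dir=/var/mariadb/backup
```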
Explore how to use replication as part of your backup strategy, allowing you to offload backup tasks to a replica server to reduce load on the primary.
Replication can be used to support the backup strategy.
Replication alone is not sufficient for backup. It helps protect against hardware failure on the primary server, but does not protect against data loss: an accidental or malicious DROP DATABASE or TRUNCATE TABLE statement is replicated to the replica as well. Care needs to be taken to prevent data getting out of sync between the primary and the replica.
Learn how MariaDB stores partitioned tables on the filesystem, typically creating separate .ibd files for each partition when using InnoDB.
A partitioned table is stored in multiple files. By default, these files are stored in the MariaDB (or InnoDB) data directory. It is possible to keep them in different paths by specifying table options. This is useful to store different partitions on different devices.
Note that, if the innodb_file_per_table server system variable is set to 0 at the time of the table creation, all partitions are stored in the system tablespace.
The following files exist for each partitioned table:
Understand how to retrieve metadata about partitions using the INFORMATION_SCHEMA.PARTITIONS table to monitor row counts and storage usage.
The PARTITIONS table in the INFORMATION_SCHEMA database contains information about partitions.
The SHOW TABLE STATUS statement output contains a Create_options column, which contains the string 'partitioned' for partitioned tables.
The SHOW CREATE TABLE statement returns the statement that can be used to re-create a table, including the partitions definition.
This page is licensed: CC BY-SA / Gnu FDL
mariadb -u user_name -p -h ip_address db_name

MariaDB [test]>

CREATE DATABASE IF NOT EXISTS test;
USE test;
CREATE TABLE IF NOT EXISTS books (
BookID INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
Title VARCHAR(100) NOT NULL,
SeriesID INT, AuthorID INT);
CREATE TABLE IF NOT EXISTS authors
(id INT NOT NULL PRIMARY KEY AUTO_INCREMENT);
CREATE TABLE IF NOT EXISTS series
(id INT NOT NULL PRIMARY KEY AUTO_INCREMENT);
INSERT INTO books (Title,SeriesID,AuthorID)
VALUES('The Fellowship of the Ring',1,1),
('The Two Towers',1,1), ('The Return of the King',1,1),
('The Sum of All Men',2,2), ('Brotherhood of the Wolf',2,2),
('Wizardborn',2,2), ('The Hobbbit',0,1);

SHOW TABLES;
+----------------+
| Tables_in_test |
+----------------+
| authors |
| books |
| series |
+----------------+
3 rows in set (0.00 sec)

DESCRIBE books;
+----------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------+--------------+------+-----+---------+----------------+
| BookID | int(11) | NO | PRI | NULL | auto_increment |
| Title | varchar(100) | NO | | NULL | |
| SeriesID | int(11) | YES | | NULL | |
| AuthorID | int(11) | YES | | NULL | |
+----------+--------------+------+-----+---------+----------------+

SELECT * FROM books;
+--------+----------------------------+----------+----------+
| BookID | Title | SeriesID | AuthorID |
+--------+----------------------------+----------+----------+
| 1 | The Fellowship of the Ring | 1 | 1 |
| 2 | The Two Towers | 1 | 1 |
| 3 | The Return of the King | 1 | 1 |
| 4 | The Sum of All Men | 2 | 2 |
| 5 | Brotherhood of the Wolf | 2 | 2 |
| 6 | Wizardborn | 2 | 2 |
| 7 | The Hobbbit | 0 | 1 |
+--------+----------------------------+----------+----------+
7 rows in set (0.00 sec)

INSERT INTO books (Title, SeriesID, AuthorID)
VALUES ("Lair of Bones", 2, 2);
Query OK, 1 row affected (0.00 sec)

SELECT * FROM books;

UPDATE books
SET Title = "The Hobbit"
WHERE BookID = 7;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Learn about the Aria storage engine in MariaDB Server. Understand its features, advantages, and use cases, particularly for crash-safe operations and transactional workloads.
Optimize large tables in MariaDB Server with partitioning. Learn how to divide tables into smaller, manageable parts for improved performance, easier maintenance, and scalability.
Manage tables in MariaDB Server. This section details creating, altering, and dropping tables, along with understanding data types and storage engines for optimal database design.
Understand MariaDB Server's storage engines. Explore the features and use cases of InnoDB, Aria, MyISAM, and other engines to choose the best option for your specific data needs.
Learn to back up and restore MariaDB Server databases. This section covers essential strategies and tools to ensure data safety and quick recovery from potential data loss.
To import an individual tablespace, the server also needs a metadata file with a .cfg extension. For mariadb-backup to create these files, you also need to add the --export option during the prepare step. For example, you might execute the following command:
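One possible form of that command (the target directory path is an assumption, adjust it to your own backup location):

```shell
mariadb-backup --prepare --export --target-dir=/var/mariadb/backup
```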
If this operation completes without error, then the backup is ready to be restored.
Note
mariadb-backup did not originally support the --export option (see MDEV-13466). In earlier versions of MariaDB, this means that mariadb-backup could not create .cfg files for InnoDB file-per-table tablespaces during the --prepare stage. You can still import file-per-table tablespaces without the .cfg files in many cases, so it may still be possible in those versions to restore partial backups or to restore individual tables and partitions with just the .ibd files. If you have a full backup and need to create .cfg files for InnoDB file-per-table tablespaces, you can do so by preparing the backup as usual without the --export option, restoring the backup, and then starting the server. At that point, you can use the server's built-in features to copy the transportable tablespaces.
The process for restoring individual tables and/or partitions is quite different from the process for restoring full backups. Rather than using the --copy-back or --move-back options, each individual InnoDB file-per-table tablespace file has to be manually imported into the target server. The process used to restore the backup depends on whether partitioning is involved.
To restore individual non-partitioned tables from a backup, find the .ibd and .cfg files for the table in the backup, and then import them using the Importing Transportable Tablespaces for Non-partitioned Tables process.
To restore individual partitions or partitioned tables from a backup, find the .ibd and .cfg files for the partitions in the backup, and then import them using the Importing Transportable Tablespaces for Partitioned Tables process.
This page is licensed: CC BY-SA / Gnu FDL
The terms master and slave have historically been used in replication, and MariaDB has begun the process of adding primary and replica synonyms. The old terms will continue to be used to maintain backward compatibility - see MDEV-18777 to follow progress on this effort.
Replication is most commonly used to support backups as follows:
A primary server replicates to a replica
Backups are then run off the replica without any impact on the primary.
Backups can have a significant effect on a server, and a high-availability primary may not be able to be stopped or locked, or to handle the extra load of a backup. Running the backup from a replica has the advantage that the replica can be shut down or locked and backed up without any impact on the primary server.
Note that when backing up off a replica server, it is important to ensure that the servers keep the data in sync. See for example Replication and Foreign Keys for a situation when identical statements can result in different data on a replica and a primary.
This page is licensed: CC BY-SA / Gnu FDL
table_name#P#partition_name.ext
Normal files created by the storage engine use this pattern for names. The extension depends on the storage engine.
For example, an InnoDB table with 4 partitions will have the following files:
If we convert the table to MyISAM, we will have these files:
This page is licensed: CC BY-SA / Gnu FDL
table_name.frm
Contains the table definition. Non-partitioned tables have this file, too.
table_name.par
Contains the partitions definitions.
orders.frm
orders.par
orders#P#p0.ibd
orders#P#p1.ibd
orders#P#p2.ibd
orders#P#p3.ibd

orders.frm
orders.par
orders#P#p0.MYD
orders#P#p0.MYI
orders#P#p1.MYD
orders#P#p1.MYI
orders#P#p2.MYD
orders#P#p2.MYI
orders#P#p3.MYD
orders#P#p3.MYI

LINEAR KEY partitioning makes use of a powers-of-two algorithm, while KEY partitioning uses modulo arithmetic to determine the partition number.
Adding, dropping, merging and splitting partitions is much faster than with the KEY partitioning type; however, data is less likely to be evenly distributed over the partitions.
This page is licensed: CC BY-SA / Gnu FDL
PARTITION BY LINEAR KEY ([column_names])
[PARTITIONS (number_of_partitions)]

CREATE OR REPLACE TABLE t1 (v1 INT)
PARTITION BY LINEAR KEY (v1)
PARTITIONS 2;

CREATE OR REPLACE TABLE t1 (c1 INT, c2 DATETIME)
PARTITION BY LINEAR HASH(TO_DAYS(c2))
PARTITIONS 5;

This guide covers the fundamentals of creating database structures, inserting data, and retrieving information using the default MariaDB client.
MariaDB is a database system, a database server. To interface with the MariaDB server, you can use a client program, or you can write a program or script in one of the popular programming languages (e.g., PHP) using an API (Application Programming Interface) to interface with the MariaDB server. For the purposes of this article, we will focus on using the default client that comes with MariaDB, called mariadb. With this client, you can either enter queries from the command-line, or you can switch to the client's interactive monitor mode. To start, we'll use the latter.
From the Linux command-line, you would enter the following to log in as the root user and to enter monitor mode:
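A minimal sketch of that login command, using localhost as a stand-in for your server's address:

```shell
mariadb -u root -p -h localhost
```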
The -u option is for specifying the user name. You would replace root here if you want to use a different user name. This is the MariaDB user name, not the Linux user name. The password for the MariaDB user root will probably be different from the Linux user root. Incidentally, it's not a good security practice to use the root user unless you have a specific administrative task to perform for which only root has the needed privileges.
The -p option above instructs the mariadb client to prompt you for the password. If the password for the root user hasn't been set yet, then the password is blank and you would just hit [Enter] when prompted. The -h option is for specifying the host name or the IP address of the server. This would be necessary if the client is running on a different machine than the server. If you've secure-shelled into the server machine, you probably won't need to use the host option. In fact, if you're logged into Linux as root, you won't need the user option—the -p is all you'll need. Once you've entered the line above along with the password when prompted, you are logged into MariaDB through the client. To exit, type quit or exit and press [Enter].
In order to be able to add and to manipulate data, you first have to create a database structure. Creating a database is simple. You would enter something like the following from within the mariadb client:
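Given that the text goes on to create a database called bookstore and make it the default, the statements would be along these lines:

```sql
CREATE DATABASE bookstore;

USE bookstore;
```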
This very minimal, first SQL statement will create a sub-directory called bookstore on the Linux filesystem in the directory which holds your MariaDB data files. It won't create any data, obviously. It'll just set up a place to add tables, which will in turn hold data. The second SQL statement above will set this new database as the default database. It will remain your default until you change it to a different one or until you log out of MariaDB.
The next step is to begin creating tables. This is only a little more complicated. To create a simple table that will hold basic data on books, we could enter something like the following:
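A sketch of such a table, reconstructed from the column descriptions that follow; the names year_pub and publisher_id, and the CHAR(4) type for the year column, are assumptions:

```sql
CREATE TABLE books (
  isbn CHAR(20) PRIMARY KEY,  -- fixed-width identifier for each row
  title VARCHAR(50),          -- variable width, at most fifty characters
  author_id INT,              -- identification number of the author
  publisher_id INT,           -- identification number of the publisher
  year_pub CHAR(4),           -- publication year (type is an assumption)
  description TEXT            -- up to 65,535 bytes of data per row
);
```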
This SQL statement creates the table books with six fields, or rather columns. The first column (isbn) is an identification number for each row; the name relates to the unique identifier used in the book publishing business. It has a fixed-width character type of 20 characters. It is the primary key column, on which the data is indexed. The column data type for the book title is a variable width character column of fifty characters at most. The third and fourth columns are used for identification numbers for the author and the publisher. They are integer data types. The fifth column is used for the publication year of each book. The last column is for entering a description of each book. It's a TEXT data type, which means that it's a variable width column that can hold up to 65,535 bytes of data for each row. There are several other data types that may be used for columns, but this gives you a good sampling.
To see how the table we created looks, enter the following SQL statement:
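For the books table, that statement is:

```sql
DESCRIBE books;
```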
To change the settings of a table, you can use the ALTER TABLE statement. I'll cover that statement in another article. To delete a table completely (including its data), you can use the DROP TABLE statement, followed by the table name. Be careful with this statement, since it's not reversible.
The next table we'll create for our examples is the authors table to hold author information. This table will save us from having to enter the author's name and other related data for each book written by each author. It also helps to ensure consistency of data: there's less chance of inadvertent spelling deviations.
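A sketch of such a table; the split of the author's name into name_last and name_first columns is an assumption, while the auto-incrementing author_id and a country column follow from the examples later in the text:

```sql
CREATE TABLE authors (
  author_id INT AUTO_INCREMENT PRIMARY KEY,  -- assigned automatically
  name_last VARCHAR(50),
  name_first VARCHAR(50),
  country VARCHAR(50)
);
```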
We'll join this table to the books table as needed. For instance, we would use it when we want a list of books along with their corresponding authors' names. For a real bookstore's database, both of these tables would probably have more columns. There would also be several more tables. For the examples that follow, these two tables as they are are enough.
Before moving on to the next step of adding data to the tables, let me point out a few minor items that I've omitted mentioning. SQL statements end with a semi-colon (or a \G). You can spread an SQL statement over multiple lines. However, it won't be passed to the server by the client until you terminate it with a semi-colon and hit [Enter]. To cancel an SQL statement once you've started typing it, enter \c and press [Enter].
As a basic convention, reserved words are printed in all capital letters. This isn't necessary, though. MariaDB is case-insensitive with regards to reserved words. Database and table names, however, are case-sensitive on Linux. This is because they reference the related directories and files on the filesystem. Column names aren't case sensitive since they're not affected by the filesystem, per se. As another convention, we use lower-case letters for structural names (e.g., table names). It's a matter of preference for deciding on names.
The primary method for entering data into a table is to use the INSERT statement. As an example, let's enter some information about an author into the authors table. We'll do that like so:
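A sketch of that statement, assuming author name columns called name_last and name_first:

```sql
INSERT INTO authors (name_last, name_first, country)
VALUES ('Kafka', 'Franz', 'Czech Republic');
```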
This will add the name and country of the author Franz Kafka to the authors table. We don't need to give a value for the author_id since that column was created with the AUTO_INCREMENT option. MariaDB will automatically assign an identification number. You can manually assign one, especially if you want to start the count at a higher number than 1 (e.g., 1000). Since we are not providing data for all of the columns in the table, we have to list the columns for which we are giving data, in the order that the data appears in the set following the VALUES keyword. This means that we could give the data in a different order.
For an actual database, we would probably enter data for many authors. We'll assume that we've done that and move on to entering data for some books. Below is an entry for one of Kafka's books:
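A sketch, assuming the column names isbn, title, and author_id; the ISBN value is hypothetical:

```sql
INSERT INTO books (title, author_id, isbn)
VALUES ('The Castle', 1, '0805211063');
```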
This adds a record for Kafka's book, The Castle. Notice that we mixed up the order of the columns, but it still works because both sets agree. We indicate that the author is Kafka by giving a value of 1 for the author_id. This is the value that was assigned by MariaDB when we entered the row for Kafka earlier. Let's enter a few more books for Kafka, but by a different method:
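A sketch of the multi-row form; the ISBN values are hypothetical, and 'America' is deliberately misspelled with a "c", which the primer corrects further on:

```sql
INSERT INTO books (title, author_id, isbn)
VALUES ('The Trial', 1, '0805210407'),
       ('The Metamorphosis', 1, '0553213695'),
       ('America', 1, '0805210644');
```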
In this example, we've added three books in one statement. This allows us to give the list of column names once. We also give the keyword VALUES only once, followed by a separate set of values for each book, each contained in parentheses and separated by commas. This cuts down on typing and speeds up the process. Either method is fine and both have their advantages. To be able to continue with our examples, let's assume that data on thousands of books has been entered. With that behind us, let's look at how to retrieve data from tables.
The primary method of retrieving data from tables is to use a SELECT statement. There are many options available with the SELECT statement, but you can start simply. As an example, let's retrieve a list of book titles from the books table:
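That query might look like:

```sql
SELECT title FROM books;
```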
This will display all of the rows of books in the table. If the table has thousands of rows, MariaDB will display thousands. To limit the number of rows retrieved, we could add a LIMIT clause to the SELECT statement like so:
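For example:

```sql
SELECT title FROM books LIMIT 5;
```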
This will limit the number of rows displayed to five. To be able to list the author's name for each book along with the title, you will have to join the books table with the authors table. To do this, we can use the JOIN clause like so:
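A sketch of such a join, assuming an author surname column called name_last:

```sql
SELECT title, name_last
FROM books
JOIN authors USING (author_id);
```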
Notice that the primary table from which we're drawing data is given in the FROM clause. The table to which we're joining is given in the JOIN clause, along with the commonly named column (i.e., author_id) that we're using for the join.
To retrieve the titles of only books written by Kafka based on his name (not the author_id), we would use the WHERE clause with the SELECT statement. This would be entered like the following:
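A sketch, again assuming a surname column called name_last:

```sql
SELECT title AS 'Kafka Books'
FROM books
JOIN authors USING (author_id)
WHERE name_last = 'Kafka';
```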
This statement will list the titles of Kafka books stored in the database. Notice that I've added the AS parameter next to the column name title to change the column heading in the results set to Kafka Books. This is known as an alias. Looking at the results here, we can see that the title for one of Kafka's books is incorrect. His book Amerika is spelled above with a "c" in the table instead of a "k". This leads to the next section on changing data.
In order to change existing data, a common method is to use the UPDATE statement. When changing data, though, we need to be sure that we change the correct rows. In our example, there could be another book with the title America written by a different author. Since the key column isbn has only unique numbers and we know the ISBN number for the book that we want to change, we can use it to specify the row.
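A sketch of that statement; the ISBN shown is hypothetical and stands in for the real number of the row to fix:

```sql
UPDATE books
SET title = 'Amerika'
WHERE isbn = '0805210644';
```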
This will change the value of the title column for the row specified. We could change the value of other columns for the same row by giving the column = value for each, separated by commas.
If we want to delete a row of data, we can use the DELETE statement. For instance, suppose that our fictitious bookstore has decided no longer to carry books by John Grisham. By first running a SELECT statement, we determine the identification number for the author to be 2034. Using this author identification number, we could enter the following:
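Using the author identification number given in the text:

```sql
DELETE FROM books
WHERE author_id = 2034;
```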
This statement will delete all rows from the table books for the author_id given. To do a clean job of it, we'll have to do the same for the authors table. We would just replace the table name in the statement above; everything else would be the same.
This is a very basic primer for using MariaDB. Hopefully, it gives you an idea of how to get started with MariaDB. Each of the SQL statements mentioned here has several more options and clauses. We will cover these statements and others in greater detail in other articles. For now, though, you can learn more about them by experimenting and by further reading of the online documentation.
This page is licensed: CC BY-SA / Gnu FDL
Learn how to design a robust backup strategy tailored to your business needs, balancing recovery time objectives and data retention policies.
The strategy applied when implementing data backups depends on business needs. These needs can be evaluated by performing a data inventory, determining data recovery objectives, considering the replication environment, and considering encryption requirements. A backup storage strategy and testing of backup and recovery procedures are also critical.
Backup strategy requirements flow from the understanding you build by performing a data inventory. A data inventory is established by asking questions such as:
What data is housed in the databases?
What business purpose does this data serve?
How long does the data need to be retained in order to meet this business purpose?
Are there any legal or regulatory requirements that would limit the length of data retention?
Data recovery requirements are often defined in terms of Recovery Point Objective (RPO) and Recovery Time Objective (RTO). RTO and RPO are considered in the context of the data identified in the data inventory.
Recovery Point Objective (RPO) defines the maximum amount of data a business is willing to lose. For example, a business can define a RPO of 24 hours.
Recovery Time Objective (RTO) defines how quickly a business needs to restore service in the event of a fault. For example, a business can define a RTO of 8 hours.
Backup strategy plays a substantial role in achieving RPO and RTO.
RPO depends on completion of backups, which provide a viable recovery point. Since RPO is measured at backup completion, not backup initiation, backup jobs must be scheduled at an interval smaller than the RPO.
Techniques for achieving RPO include:
Frequent incremental backups and less frequent full backups.
Performing backups in conjunction with replication and clustering to eliminate impact on production workloads, allowing a higher backup frequency.
Automated monitoring of backup status.
Automated testing of backups.
The RTO window typically commences when the business decides to recover from backups, not at the start of an incident.
Techniques for achieving RTO include:
Leveraging information produced during incident response, which can reduce the set of data to restore from backups, or identify specific data validation requirements dependent on the nature of the incident.
Having fast access to backup data. Performance requirements of backup infrastructure should be understood for both backup and restoration workloads.
Using delayed replication, either within the same data center or to a different data center, can provide a shorter path to recovery. This is particularly true when coupled with robust application monitoring, which allows intervention before the window of delay elapses.
MariaDB Enterprise Server supports several implementations of replication, which accurately duplicates data from one Server to one or more other Servers. The use of a dedicated replica as a source for backups can minimize workload impact.
MariaDB Enterprise Cluster implements virtually synchronous replication, where each Server instance contains a replica of all of the data for the Cluster. Backups can be performed from any node in the Cluster.
MariaDB Enterprise Server supports encryption on disk (data-at-rest encryption) and on the network (data-in-transit encryption).
MariaDB Enterprise Backup copies tablespaces from disk. When data-at-rest encryption is enabled, backups contain encrypted data.
MariaDB Enterprise Backup supports TLS encryption for communications with MariaDB Enterprise Server. To enable TLS encryption, set TLS options from the command-line or in the configuration file:
How backups are stored can impact backup viability. Backup storage also presents separate risks. These risks need to be carefully considered:
Backup data should always be stored separately from the system being backed up, and separate from the system used for recovery.
Backup data should be subject to equal or stronger controls than data in production databases. For example, backup data should generally be encrypted even where a decision has been made that a production database will not use data-at-rest encryption.
Business requirements may define a need for offsite storage of backups as a means of guaranteeing delivery on RPO. In these cases you should also consider onsite storage of backups as a means of guaranteeing delivery on RTO.
Testing is a critical factor in the successful operation of data systems.
Backups should be tested. Recovery using backups and recovery procedures should be tested.
This page is: Copyright © 2025 MariaDB. All rights reserved.
Secure and compress backup streams. Learn to pipe backup output to tools like GPG and GZIP for encryption and storage efficiency.
mariadb-backup supports streaming to stdout with the --stream=xbstream option. This option allows easy integration with popular encryption and compression tools. Below are several examples.
The following example creates an AES-encrypted backup, protected with the password "mypass" and stores it in a file "backup.xb.enc":
To decrypt and unpack this backup into the current directory, the following command can be used:
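A sketch of such a decryption pipeline, assuming the password "mypass" used in the encryption example and the mbstream utility shipped with MariaDB for extracting xbstream-format streams:

```shell
# Decrypt with openssl, then extract the xbstream archive into the current directory.
openssl enc -d -aes-256-cbc -k mypass -in backup.xb.enc | mbstream -x
```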
This example compresses the backup without encrypting:
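A sketch of the compression pipeline (the output file name is illustrative):

```shell
# Stream the backup through gzip; no encryption is applied.
mariadb-backup --user=root --backup --stream=xbstream | gzip > backup.xb.gz
```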
We can decompress and unpack the backup as follows:
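A sketch of the matching extraction pipeline, using the mbstream utility shipped with MariaDB:

```shell
# Decompress and extract the xbstream archive into the current directory.
gzip -d -c backup.xb.gz | mbstream -x
```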
This example adds a compression step before the encryption, otherwise looks almost identical to the previous example:
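A sketch of the combined pipeline, reusing the password "mypass" from the earlier encryption example:

```shell
# Compress first, then encrypt the compressed stream.
mariadb-backup --user=root --backup --stream=xbstream | gzip | openssl enc -aes-256-cbc -k mypass > backup.xb.gz.enc
```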
We can decrypt, decompress and unpack the backup as follows (note gzip -d in the pipeline):
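A sketch of the reverse pipeline, assuming the same password and the mbstream utility:

```shell
# Decrypt, then decompress, then extract into the current directory.
openssl enc -d -aes-256-cbc -k mypass -in backup.xb.gz.enc | gzip -d | mbstream -x
```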
The 7zip archiver is a popular utility (especially on Windows) that supports reading from standard input with the -si option and writing to standard output with the -so option, and can thus be used together with mariadb-backup.
Compressing backup with the 7z command line utility works as follows:
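A sketch of such a command (the archive name is illustrative):

```shell
# 7z reads the backup stream from stdin (-si) and adds it to the archive.
mariadb-backup --user=root --backup --stream=xbstream | 7z a -si backup.xb.7z
```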
Uncompress and unpack the archive with
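A sketch of the extraction command, using the mbstream utility shipped with MariaDB:

```shell
# 7z writes the archived stream to stdout (-so) for mbstream to extract.
7z e -so backup.xb.7z | mbstream -x
```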
7z also has built-in AES-256 encryption. To encrypt the backup from the previous example using the password SECRET, add -pSECRET to the 7z command line.
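A sketch of the encrypted variants (the password SECRET and file name are illustrative):

```shell
# Compress and encrypt in one step:
mariadb-backup --user=root --backup --stream=xbstream | 7z a -si -pSECRET backup.xb.7z

# Decrypt and unpack:
7z e -so -pSECRET backup.xb.7z | mbstream -x
```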
Compress
Decompress, unpack
Encryption
Decrypt, unpack
Most of the described tools also provide a way to enter a passphrase interactively (although 7zip does not seem to work well when reading input from stdin). Please consult the documentation of the tools for more info.
By default files like xtrabackup_checkpoints are also written to the output stream only, and so would not be available for taking further incremental backups without prior extraction from the compressed or encrypted stream output file.
To avoid this, these files can additionally be written to a directory that can then be used as input for further incremental backups, using the --extra-lsndir=... option.
See also: Combining incremental backups with streaming output.
This page is licensed: CC BY-SA / Gnu FDL
The DROP FUNCTION statement removes a stored function from the database, deleting its definition and associated privileges.
The DROP FUNCTION statement is used to drop a stored function or a user-defined function (UDF). That is, the specified routine is removed from the server, along with all privileges specific to the function. You must have the ALTER ROUTINE privilege for the routine in order to drop it. If the automatic_sp_privileges server system variable is set, both the ALTER ROUTINE and EXECUTE privileges are granted automatically to the routine creator.
The IF EXISTS clause is a MySQL/MariaDB extension. It prevents an error from occurring if the function does not exist. A NOTE is produced that can be viewed with SHOW WARNINGS.
For dropping a user-defined function (UDF), see DROP FUNCTION UDF.
This page is licensed: GPLv2, originally from
Understand LIST partitioning, where rows are assigned to partitions based on whether a column value matches one in a defined list of values.
LIST partitioning is conceptually similar to RANGE partitioning. In both cases you decide a partitioning expression (a column, or a slightly more complex calculation) and use it to determine which partitions will contain each row. However, with the RANGE type, partitioning is done by assigning a range of values to each partition. With the LIST type, we assign a set of values to each partition. This is usually preferred if the partitioning expression can return a limited set of values.
A variant of this partitioning method, LIST COLUMNS, allows us to use multiple columns and more datatypes.
The last part of a statement can be the definition of the new table's partitions. In the case of LIST partitioning, the syntax is as follows:
PARTITION BY LIST indicates that the partitioning type is LIST.
The partitioning_expression is an SQL expression that returns a value from each row. In the simplest cases, it is a column name. This value is used to determine which partition should contain a row.
partition_name is the name of a partition.
value_list is a list of values. If partitioning_expression returns one of these values, the row is stored in this partition. If we try to insert something that does not belong to any of these value lists, the row is rejected with an error.
The DEFAULT partition catches all records which do not fit into other partitions.
LIST partitioning can be useful when we have a column that can only contain a limited set of values. Even in that case, RANGE partitioning could be used instead; but LIST partitioning allows us to equally distribute the rows by assigning a proper set of values to each partition.
This page is licensed: CC BY-SA / Gnu FDL
List of file types included in a backup. Understand which data files, logs, and configuration files are preserved during the backup process.
mariadb-backup backs up the files listed below.
mariadb-backup backs up the following InnoDB data files:
mariadb-backup will back up tables that use the Aria storage engine. This data is located in the directory defined by the aria_log_dir_path system variable. mariadb-backup backs this data up by performing a checkpoint.
mariadb-backup will back up tables that use the MyRocks storage engine.
mariadb-backup also backs up files with the following extensions:
frm
isl
MYD
MYI
mariadb-backup does not back up the files listed below.
This page is licensed: CC BY-SA / Gnu FDL
This page details the restrictions on stored functions, such as the inability to return result sets or use transaction control statements.
The following restrictions apply to stored functions.
All of the restrictions listed in Stored Routine Limitations.
Statements that return a result set are not permitted. For example, a regular SELECT is not permitted, but SELECT ... INTO is. Cursors and FETCH statements are permitted.
FLUSH statements are not permitted.
Statements that perform explicit or implicit commits or rollbacks are not permitted.
Cannot be used recursively.
Cannot make changes to a table that is already in use (reading or writing) by the statement invoking the stored function.
Cannot refer to a temporary table multiple times under different aliases, even in different statements.
ROLLBACK TO SAVEPOINT and RELEASE SAVEPOINT statements within a stored function cannot refer to a savepoint defined outside the current function.
Prepared statements (PREPARE, EXECUTE, DEALLOCATE PREPARE) cannot be used, and therefore neither can statements constructed as strings and then executed.
This page is licensed: CC BY-SA / Gnu FDL
This page outlines constraints when using partitioning, such as the maximum number of partitions and restrictions on foreign keys and query cache usage.
The following limitations apply to partitioning in MariaDB:
Each table can contain a maximum of 8192 partitions.
Queries are never parallelized, even when they involve multiple partitions.
A table can only be partitioned if the storage engine supports partitioning.
All partitions must use the same storage engine. For a workaround, see .
A partitioned table cannot contain, or be referenced by, foreign keys.
The query cache is not aware of partitioning and partition pruning. Modifying a partition will invalidate the entries related to the whole table.
Updates can run more slowly when binlog_format=ROW is used and a partitioned table is updated than an equivalent update of a non-partitioned table.
All columns used in the partitioning expression for a partitioned table must be part of every unique key that the table may have.
In versions prior to , it is not possible to create partitions on tables that contain .
The INFORMATION_SCHEMA PARTITIONS table contains information about existing partitions.
See also: suggestions on using partitions.
This page is licensed: CC BY-SA / Gnu FDL
The ALTER PROCEDURE statement modifies the characteristics of an existing stored procedure, such as its security context or comment, without changing its logic.
This statement can be used to change the characteristics of a stored procedure. More than one change may be specified in an ALTER PROCEDURE statement. However, you cannot change the parameters or body of a stored procedure using this statement. To make such changes, you must drop and re-create the procedure using .
You must have the ALTER ROUTINE privilege for the procedure. By default, that privilege is granted automatically to the procedure creator. See .
This page is licensed: GPLv2, originally from
An introduction to the various partitioning strategies available in MariaDB, helping you choose the right method for your data distribution needs.
A partitioning type determines how a partitioned table's rows are distributed across partitions. Some partition types require the user to specify a partitioning expression that determines in which partition a row is stored.
The size of individual partitions depends on the partitioning type. Read and write performance are affected by the partitioning expression. Therefore, these choices should be made carefully.
MariaDB supports the following partitioning types:
This page is licensed: CC BY-SA / Gnu FDL
Stored routines have specific restrictions, such as prohibiting certain SQL statements (e.g., LOAD DATA) and disallowing result sets in functions.
The following SQL statements are not permitted inside any stored routine (stored procedure, stored function, or trigger).
ALTER VIEW; you can use CREATE OR REPLACE VIEW instead.
LOAD DATA and LOAD TABLE.
The DROP PROCEDURE statement permanently removes a stored procedure and its associated privileges from the database.
Understand how the optimizer automatically prunes irrelevant partitions and how to explicitly select partitions in your queries for efficiency.
When a WHERE clause is related to the partitioning expression, the optimizer knows which partitions are relevant for the query. Other partitions will not be read. This optimization is called partition pruning.
EXPLAIN PARTITIONS can be used to find out which partitions are read for a given query. A column called partitions will contain a comma-separated list of the accessed partitions. For example:
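As a hypothetical illustration, consider a table t1 partitioned on an integer column num (names are invented for this example):

```sql
-- The "partitions" column of the output lists only the partitions
-- that will actually be read for this query.
EXPLAIN PARTITIONS SELECT * FROM t1 WHERE num = 2;
```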
Sometimes the WHERE clause does not contain the necessary information to use partition pruning, or the optimizer cannot infer this information. However, we may know which partitions are relevant for the query. We can force MariaDB to only access the specified partitions by adding a PARTITION clause. This feature is called partition selection. For example:
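A hypothetical query restricted to two named partitions (table, column, and partition names are invented for this example):

```sql
-- Only partitions p0 and p1 are accessed, regardless of what the
-- optimizer could infer from the WHERE clause.
SELECT * FROM t1 PARTITION (p0, p1) WHERE num < 4;
```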
The PARTITION clause is supported for all DML statements:
This page explains the privileges required to create, alter, execute, and drop stored routines, including the automatic grants for creators.
$ mariadb-backup --prepare --export \
--target-dir=/var/mariadb/backup/ \
--user=mariadb-backup --password=mypassword

mariadb -u root -p -h localhost

mariadb-backup --user=root --backup --stream=xbstream | openssl enc -aes-256-cbc -k mypass > backup.xb.enc

DROP FUNCTION [IF EXISTS] f_name

ALTER PROCEDURE proc_name [characteristic ...]
characteristic:
{ CONTAINS SQL | NO SQL | READS SQL DATA | MODIFIES SQL DATA }
| SQL SECURITY { DEFINER | INVOKER }
| COMMENT 'string'

INSERT DELAYED is permitted, but the statement is handled as a regular INSERT.
LOCK TABLES and UNLOCK TABLES.
References to local variables within prepared statements inside a stored routine (use user-defined variables instead).
BEGIN (WORK) is treated as the beginning of a BEGIN END block, not a transaction, so START TRANSACTION needs to be used instead.
The number of permitted recursive calls is limited to max_sp_recursion_depth. If this variable is 0 (the default), recursion is disabled. The limit does not apply to stored functions.
Most statements that are not permitted in prepared statements are not permitted in stored programs. See Prepare Statement:Permitted statements for a list of statements that can be used. SIGNAL, RESIGNAL and GET DIAGNOSTICS are exceptions, and may be used in stored routines.
There are also further limitations specific to the kind of stored routine.
Note that, if a stored program calls another stored program, the latter will inherit the caller's limitations. So, for example, if a stored procedure is called by a stored function, that stored procedure will not be able to produce a result set, because stored functions can't do this.
This page is licensed: CC BY-SA / Gnu FDL
Performing drills and exercises that periodically test recovery procedures to confirm readiness.
KEY takes an optional list of column_names, and the hashing function is provided by the server. Just like HASH partitioning, KEY partitioning lets the server take care of the partitioning and ensures an even distribution among the partitions. The main difference is that KEY partitioning works directly on column_names and cannot accept a partitioning_expression based on them, in contrast to HASH partitioning, which can.
If no column_names are specified, the table's primary key is used if present, or a NOT NULL unique key if no primary key is present. If neither of these keys is present, omitting column_names will result in an error:
Unlike other partitioning types, columns used for partitioning by KEY are not limited to integer or NULL values.
KEY partitions do not support column index prefixes. Any columns in the partitioning key that make use of column prefixes are not used.
The unique key must be NOT NULL:
KEY requires column_names if no primary key or NOT NULL unique key is present:
Primary key columns with index prefixes are silently ignored, so the following two queries are equivalent:
a(5) and c(5) are silently ignored in the former.
If all columns use index prefixes, the statement fails with a slightly misleading error:
This page is licensed: CC BY-SA / Gnu FDL
mariadb-backup --backup \
--target-dir=/data/backups/full \
--user=mariadb-backup \
--password=mbu_passwd \
--ssl-ca=/etc/my.cnf.d/certs/ca.pem \
--ssl-cert=/etc/my.cnf.d/certs/client-cert.pem \
--ssl-key=/etc/my.cnf.d/certs/client-key.pem

DROP FUNCTION hello;
Query OK, 0 rows affected (0.042 sec)
DROP FUNCTION hello;
ERROR 1305 (42000): FUNCTION test.hello does not exist
DROP FUNCTION IF EXISTS hello;
Query OK, 0 rows affected, 1 warning (0.000 sec)
SHOW WARNINGS;
+-------+------+------------------------------------+
| Level | Code | Message |
+-------+------+------------------------------------+
| Note | 1305 | FUNCTION test.hello does not exist |
+-------+------+------------------------------------+

PARTITION BY LIST (partitioning_expression)
(
PARTITION partition_name VALUES IN (value_list),
[ PARTITION partition_name VALUES IN (value_list), ... ]
[ PARTITION partition_name DEFAULT ]
)

CREATE OR REPLACE TABLE t1 (
num TINYINT(1) NOT NULL
)
ENGINE = InnoDB
PARTITION BY LIST (num) (
PARTITION p0 VALUES IN (0,1),
PARTITION p1 VALUES IN (2,3),
PARTITION p2 DEFAULT
);

ALTER PROCEDURE simpleproc SQL SECURITY INVOKER;

PARTITION BY KEY ([column_names])
[PARTITIONS (number_of_partitions)]

ERROR 1488 (HY000): Field in list of fields for partition function not found in table

CREATE OR REPLACE TABLE t1 (v1 INT)
PARTITION BY KEY (v1)
PARTITIONS 2;

CREATE OR REPLACE TABLE t1 (v1 INT, v2 INT)
PARTITION BY KEY (v1,v2)
PARTITIONS 2;

CREATE OR REPLACE TABLE t1 (
id INT NOT NULL PRIMARY KEY,
name VARCHAR(5)
)
PARTITION BY KEY()
PARTITIONS 2;

CREATE OR REPLACE TABLE t1 (
id INT NOT NULL UNIQUE KEY,
name VARCHAR(5)
)
PARTITION BY KEY()
PARTITIONS 2;

CREATE OR REPLACE TABLE t1 (
id INT NULL UNIQUE KEY,
name VARCHAR(5)
)
PARTITION BY KEY()
PARTITIONS 2;
ERROR 1488 (HY000): Field in list of fields for partition function not found in table

CREATE OR REPLACE TABLE t1 (
id INT NULL UNIQUE KEY,
name VARCHAR(5)
)
PARTITION BY KEY()
PARTITIONS 2;
ERROR 1488 (HY000): Field in list of fields for partition function not found in table

CREATE OR REPLACE TABLE t1 (
id INT NULL UNIQUE KEY,
name VARCHAR(5)
)
PARTITION BY KEY(name)
PARTITIONS 2;

CREATE OR REPLACE TABLE t1 (
a VARCHAR(10),
b VARCHAR(10),
c VARCHAR(10),
PRIMARY KEY (a(5), b, c(5))
) PARTITION BY KEY() PARTITIONS 2;
CREATE OR REPLACE TABLE t1 (
a VARCHAR(10),
b VARCHAR(10),
c VARCHAR(10),
PRIMARY KEY (b)
) PARTITION BY KEY() PARTITIONS 2;

CREATE OR REPLACE TABLE t1 (
a VARCHAR(10),
b VARCHAR(10),
c VARCHAR(10),
PRIMARY KEY (a(5), b(5), c(5))
) PARTITION BY KEY() PARTITIONS 2;
ERROR 1503 (HY000): A PRIMARY KEY must include all columns in the table's partitioning function

MAD
MAI
MRG
TRG
TRN
ARM
ARZ
CSM
CSV
opt
par
Both the ALTER ROUTINE and EXECUTE privileges are granted automatically to the routine creator - see automatic_sp_privileges. The IF EXISTS clause is a MySQL/MariaDB extension. It prevents an error from occurring if the procedure or function does not exist. A NOTE is produced that can be viewed with SHOW WARNINGS.
While this statement takes effect immediately, threads which are executing a procedure can continue execution.
IF EXISTS:
This page is licensed: GPLv2, originally from fill_help_tables.sql
In general, partition pruning is applied to statements contained in triggers.
However, note that if a BEFORE INSERT or BEFORE UPDATE trigger is defined on a table, MariaDB doesn't know in advance if the columns used in the partitioning expression are changed. For this reason, it is forced to lock all partitions.
This page is licensed: CC BY-SA / Gnu FDL
To make changes to, or drop, a stored routine, the ALTER ROUTINE privilege is needed. The creator of a routine is temporarily granted this privilege if they attempt to change or drop a routine they created, unless the automatic_sp_privileges variable is set to 0 (it defaults to 1).
The SUPER privilege is also required if statement-based binary logging is used. See Binary Logging of Stored Routines for more details.
To run a stored routine, the EXECUTE privilege is needed. This is also temporarily granted to the creator if they attempt to run their routine unless the automatic_sp_privileges variable is set to 0.
The SQL SECURITY clause specifies which privileges are used when a routine is called. If SQL SECURITY is INVOKER, the function body is evaluated using the privileges of the user calling the function. If SQL SECURITY is DEFINER, the function body is always evaluated using the privileges of the definer account. DEFINER is the default. Thus, by default, users who can access the database associated with the stored routine can also run the routine, and potentially perform operations they wouldn't normally have permission for.
The creator of a routine is the account that ran the CREATE FUNCTION or CREATE PROCEDURE statement, regardless of whether a DEFINER is provided. The definer is, by default, the creator unless otherwise specified.
The server automatically changes the privileges in the mysql.procs_priv table as required, but will not look out for manual changes.
If left out, the DEFINER is treated as the account that created the stored routine or view. If the account creating the routine has the SUPER privilege, another account can be specified as the DEFINER.
This clause specifies the context the stored routine or view will run as. It can take two values - DEFINER or INVOKER. DEFINER is the account specified as the DEFINER when the stored routine or view was created (see the section above). INVOKER is the account invoking the routine or view.
As an example, let's assume a routine, created by a superuser who's specified as the DEFINER, deletes all records from a table. If SQL SECURITY=DEFINER, anyone running the routine, regardless of whether they have delete privileges, is able to delete the records. If SQL SECURITY=INVOKER, the routine will only delete the records if the account invoking the routine has permission to do so.
INVOKER is usually less risky, as a user cannot perform any operations they're normally unable to. However, it's not uncommon for accounts to have relatively limited permissions, but be specifically granted access to routines, which are then invoked in the DEFINER context.
All privileges that are specific to a stored routine are dropped when a DROP FUNCTION or DROP ROUTINE is run. However, if a CREATE OR REPLACE FUNCTION or CREATE OR REPLACE PROCEDURE is used to drop and replace the routine, any privileges specific to that routine will not be dropped.
Changing the DEFINER of MySQL stored routines etc. - a blog post on what to do after you've dropped a user and now want to change the DEFINER on all database objects that currently have it set to this dropped user.
This page is licensed: CC BY-SA / Gnu FDL
Learn strategies for debugging SQL queries, including formatting for readability, using aliases effectively, and interpreting syntax errors.
Following a few conventions makes finding errors in queries a lot easier, especially when you ask for help from people who might know SQL, but know nothing about your particular schema. A query easy to read is a query easy to debug. Use whitespace to group clauses within the query. Choose good table and field aliases to add clarity, not confusion. Choose the syntax that supports the query's meaning.
A query hard to read is a query hard to debug. White space is free. New lines and indentation make queries easy to read, particularly when constructing a query inside a scripting language, where variables are interspersed throughout the query.
There is a syntax error in the following. How fast can you find it?
Here's the same query, with correct use of whitespace. Can you find the error faster?
Even if you don't know SQL, you might still have caught the missing ')' following team.teamId.
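To illustrate the point, here is a hypothetical pair with the flaw described above (schema and names are invented for this example):

```sql
-- Cramped: the error hides easily.
SELECT player.playerName, team.teamName FROM player JOIN team ON (player.teamId = team.teamId WHERE team.city = 'Boston' ORDER BY player.playerName;

-- Formatted: the missing ')' after team.teamId stands out.
SELECT player.playerName, team.teamName
FROM player
JOIN team ON (player.teamId = team.teamId
WHERE team.city = 'Boston'
ORDER BY player.playerName;
```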
The exact formatting style you use isn't so important. You might like commas in the select list to follow expressions, rather than precede them. You might indent with tabs or with spaces. Adherence to some particular form is not important. Legibility is the only goal.
Aliases allow you to rename tables and fields for use within a query. This can be handy when the original names are very long, and is required for self joins and certain subqueries. However, poorly chosen aliases can make a query harder to debug, rather than easier. Aliases should reflect the original table name, not an arbitrary string.
Bad:
As the list of joined tables and the WHERE clause grow, it becomes necessary to repeatedly look back to the top of the query to see to which table any given alias refers.
Better:
Each alias is just a little longer, but the table initials give enough clues that anyone familiar with the database only need see the full table name once, and can generally remember which table goes with which alias while reading the rest of the query.
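A hypothetical before-and-after (invented schema) showing the difference:

```sql
-- Bad: arbitrary aliases force the reader to scroll back to the FROM clause.
SELECT a.playerName, b.teamName
FROM player AS a
JOIN team AS b ON b.teamId = a.teamId;

-- Better: initial-based aliases carry their meaning with them.
SELECT p.playerName, t.teamName
FROM player AS p
JOIN team AS t ON t.teamId = p.teamId;
```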
The manual warns against using the JOIN condition (that is, the ON clause) for restricting rows. Some queries, particularly those using implicit joins, take the opposite extreme - all join conditions are moved to the WHERE clause. In consequence, the table relationships are mixed with the business logic.
Bad:
Without digging through the WHERE clause, it is impossible to say what links the two tables.
Better:
The relation between the tables is immediately obvious. The WHERE clause is left to limit rows in the result set.
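A hypothetical pair (invented schema) contrasting the two styles:

```sql
-- Bad: implicit join; the table relationship is buried in the WHERE clause.
SELECT p.playerName, t.teamName
FROM player p, team t
WHERE t.teamId = p.teamId
  AND t.city = 'Boston';

-- Better: the ON clause states the relationship; WHERE only limits rows.
SELECT p.playerName, t.teamName
FROM player p
JOIN team t ON t.teamId = p.teamId
WHERE t.city = 'Boston';
```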
Compliance with such a restriction negates the use of the comma operator to join tables. It is a small price to pay. Queries should be written using the explicit JOIN keyword anyway, and the two should never be mixed (unless you like rewriting all your queries every time a new version changes operator precedence).
Syntax errors are among the easiest problems to solve. MariaDB provides an error message showing the exact point where the parser became confused. Check the query, including a few words before the phrase shown in the error message. Most syntax and parsing errors are obvious after a second look, but some are more elusive, especially when the error text seems empty, points to a valid keyword, or seems to error on syntax that appears exactly correct.
Most syntax errors are easy to interpret. The error generally details the exact source of the trouble. A careful look at the query, with the error message in mind, often reveals an obvious mistake, such as misspelled field names, a missing 'AND', or an extra closing parenthesis. Sometimes the error is a little less helpful. A frequent, less-than-helpful message:
The empty '' can be disheartening. Clearly there is an error, but where? A good place to look is at the end of the query. The '' suggests that the parser reached the end of the statement while still expecting some syntax token to appear.
Check for missing closers, such as ' and ):
Look for incomplete clauses, often indicated by an exposed comma:
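Two hypothetical queries (invented schema) showing these failure modes:

```sql
-- Missing closer: the parser reaches the end still expecting ')'.
SELECT playerName FROM player WHERE teamId IN (1, 2, 3;

-- Exposed comma: the select list was left unfinished.
SELECT playerName, FROM player;
```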
MariaDB allows table and field names and aliases that are also reserved words. To prevent ambiguity, such names must be enclosed in backticks (`):
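A hypothetical example (invented table and column names, all of which collide with keywords):

```sql
-- `desc`, `order`, and `date` are keywords; backticks make them legal identifiers.
SELECT `desc` FROM `order` ORDER BY `date`;
```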
If the syntax error is shown near one of your identifiers, check if it appears on the reserved words list.
A text editor with color highlighting for SQL syntax helps to find these errors. When you enter a field name, and it shows up in the same color as the SELECT keyword, you know something is amiss. Some common culprits:
DESC is a common abbreviation for "description" fields. It means "descending" in a MariaDB ORDER BY clause.
DATE, TIME, and TIMESTAMP are all common field names. They are also field types.
ORDER appears in sales applications. MariaDB uses it to specify sorting for results.
Some keywords are so common that MariaDB makes a special allowance to use them unquoted. My advice: don't. If it's a keyword, quote it.
As MariaDB adds new features, the syntax must change to support them. Most of the time, old syntax will work in newer versions of MariaDB. One notable exception is the change in precedence of the comma operator relative to the JOIN keyword in version 5.0. A query that used to work, such as
will now fail.
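A classic hypothetical instance (invented table names): before 5.0, the comma operator bound tighter than JOIN, so t1 was visible in the ON clause; afterwards, JOIN binds tighter and the reference fails.

```sql
-- Fails in 5.0+ with "Unknown column 't1.id' in 'on clause'".
SELECT *
FROM t1, t2
JOIN t3 ON t1.id = t3.id;
```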
More common, however, is an attempt to use new syntax in an old version. Web hosting companies are notoriously slow to upgrade MariaDB, and you may find yourself using a version several years out of date. The result can be very frustrating when a query that executes flawlessly on your own workstation, running a recent installation, fails completely in your production environment.
This query fails in any version of MySQL prior to 4.1, when subqueries were added to the server:
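A hypothetical example (invented schema) of such a subquery:

```sql
-- Subqueries require MySQL 4.1 or later.
SELECT playerName
FROM player
WHERE teamId IN (SELECT teamId FROM team WHERE city = 'Boston');
```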
This query fails in some early versions of MySQL, because the JOIN syntax did not originally allow an ON clause:
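A hypothetical example (invented schema) of such a join:

```sql
-- Very early versions accepted only comma joins, not JOIN ... ON.
SELECT p.playerName, t.teamName
FROM player p
JOIN team t ON t.teamId = p.teamId;
```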
Always check the installed version of MariaDB, and read the section of the manual relevant for that version. The manual usually indicates exactly when particular syntax became available for use.
The initial version of this article was copied, with permission, from on 2012-10-05.
This page is licensed: CC BY-SA / Gnu FDL
Back up specific databases or tables. This guide explains how to filter your backup to include only the data you need.
When using mariadb-backup, you have the option of performing partial backups. Partial backups allow you to choose which databases or tables to back up, as long as the table or partition involved is in an InnoDB file-per-table tablespace. This page documents how to perform partial backups.
Just like with full backups, in order to back up the database, you need to run mariadb-backup with the --backup option to tell it to perform a backup and with the --target-dir option to tell it where to place the backup files. The target directory must be empty or not exist.
For a partial backup, there are a few other arguments that you can provide as well:
To tell it which databases to backup, you can provide the --databases option.
To tell it which databases to exclude from the backup, you can provide the --databases-exclude option.
To tell it to check a file for the databases to backup, you can provide the --databases-file option.
The non-file partial backup options support regex in the database and table names.
For example, to take a backup of any database that starts with the string app1_ and any table in those databases that start with the string tab_, run the following command:
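A sketch of such a command (the credentials and target directory follow the conventions used elsewhere on this page; verify the exact regex form accepted by --databases against your mariadb-backup version):

```shell
mariadb-backup --backup \
   --databases='app1_.*.tab_.*' \
   --target-dir=/var/mariadb/backup/ \
   --user=mariadb-backup --password=mypassword
```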
You can use the --history option with a partial backup to log the operation in the history table for auditing purposes.
You cannot use a partial backup as the base for an incremental backup history chain. The --incremental-history-name option is incompatible with partial backups because restoring partial incrementals requires specific preparation steps (--export) that the history feature does not automate.
The time the backup takes depends on the size of the databases or tables you're backing up. You can cancel the backup if you need to, as the backup process does not modify the database.
mariadb-backup writes the backup files to the target directory. If the target directory doesn't exist, then it creates it. If the target directory exists and contains files, then it raises an error and aborts.
Just like with full backups, the data files that mariadb-backup creates in the target directory are not point-in-time consistent, given that the data files are copied at different times during the backup operation. If you try to restore from these files, InnoDB notices the inconsistencies and crashes to protect you from corruption. In fact, for partial backups, the backup is not even a completely functional MariaDB data directory, so InnoDB would raise more errors than it would for full backups. This point will also be very important to keep in mind during the restore process.
Before you can restore from a backup, you first need to prepare it to make the data files consistent. You can do so with the --prepare command option.
Partial backups rely on InnoDB's transportable tablespaces. For MariaDB to import tablespaces like these, InnoDB looks for a file with a .cfg extension. For mariadb-backup to create these files, you also need to add the --export option during the prepare step.
For example, you might execute the following command:
If this operation completes without error, then the backup is ready to be restored.
Some older versions of mariadb-backup did not support the --export option. This means that mariadb-backup could not create .cfg files for InnoDB file-per-table tablespaces during the --prepare stage. You can still import file-per-table tablespaces without the .cfg files in many cases, so it may still be possible in those versions to restore partial backups or to restore individual tables and partitions with just the .ibd files. If you have a full backup and you need to create .cfg files for InnoDB file-per-table tablespaces, then you can do so by preparing the backup as usual without the --export option, restoring the backup, and then starting the server. At that point, you can use the server's built-in features to copy the transportable tablespaces.
The restore process for partial backups is quite different than the process for full backups. A partial backup is not a completely functional data directory. The data dictionary in the InnoDB system tablespace will still contain entries for the databases and tables that were not included in the backup.
Rather than using the --copy-back or --move-back options, each individual InnoDB file-per-table tablespace file will have to be manually imported into the target server. The process used to import the file depends on whether partitioning is involved.
To restore individual non-partitioned tables from a backup, find the .ibd and .cfg files for the table in the backup, and then import them using the Importing Transportable Tablespaces for Non-partitioned Tables process.
To restore individual partitions or partitioned tables from a backup, find the .ibd and .cfg files for the partitions in the backup, and then import them using the Importing Transportable Tablespaces for Partitioned Tables process.
This page is licensed: CC BY-SA / Gnu FDL
Learn how to use dbForge Studio, a GUI tool, to perform backup and restore operations for MariaDB databases visually.
In the modern world, the importance of data is non-negotiable, and maintaining data integrity and consistency is a top priority. Data stored in databases is vulnerable to system crashes, hardware problems, security breaches, and other failures that can cause data loss or corruption. To prevent database damage, it is important to back up data regularly and to implement data restore policies. MariaDB, one of the most popular database management systems, provides several methods for configuring backup and recovery routines. This guide illustrates both processes using dbForge Studio for MySQL, which is also a fully functional GUI client for MariaDB with everything you need to accomplish database-related tasks on MariaDB.
dbForge Studio for MySQL and MariaDB has a separate module dedicated to backup and recovery jobs. Let us first look at how to set up the tool to create a MariaDB backup. Launch the Studio and go to Database > Backup and Restore > Backup Database. The Database Backup Wizard, which has several pages, will appear. On the General page, specify the database in question and how to connect to it, then choose where to save the created backup file and specify its name. There are additional optional settings – you can choose to delete old files automatically, zip the output backup file, and so on. When done, click Next.
On the Backup content page, select the objects to back up. Click Next.
On the Options page, you can specify the details of the backup process. The many available options allow you to configure the task precisely to meet specific requirements. When done, click Next.
On the Errors handling page, you configure how the Studio should handle errors that might occur during the backup. You can also set the Studio to write information about any errors it encounters to a log file.
You can save the project settings to apply them in the future. For this, in the left bottom corner of the Wizard, select one of the saving options: Save Project or Save Command Line. The latter allows saving settings as a backup script which you can execute from the command line at any time later.
The configuration process is complete. Click Backup to launch the backup.
Note: It is not obligatory to go through all the pages of the Wizard. The Backup button is available on every page, so you can launch the backup whenever you have configured everything you need.
After you click Backup, dbForge Studio for MySQL starts to create the MariaDB backup.
When this is done, you will see the confirmation message. Click Finish.
Backup and restore policies suggest creating regular backups on a daily, weekly, monthly, quarterly, and yearly basis. Besides, to minimize the consequences of possible data loss, it is highly recommended to make a backup before making any changes to a database, such as upgrading, modifying data, or redesigning the structure. Simply put, you always need a fresh backup to restore the most up-to-date version of the database. To ensure regular backups on schedule, you can use a batch file created with the help of the Studio together with Windows Task Scheduler, where you create and schedule the backup task.
This is an even faster task, done in half as many steps.
The process of data recovery from the backup file is simple. It only takes several clicks: Launch dbForge Studio for MySQL and go to Database > Backup and Restore > Restore Database. The Database Restore Wizard will appear. Specify the database name, its connection parameters, and the path to the backup file you want to restore. Then click Restore, and the process will start immediately.
When the process is complete, click Finish.
More information about this essential feature is available in the dbForge Studio documentation – it explores the routines performed on MySQL, but they fully apply to MariaDB backups. You can use the same IDE and the same workflow.
To test-drive this and other features of the Studio (the IDE includes all the tools necessary for the development, management, and administration of databases on MariaDB), you can download a free trial. dbForge Studio for MySQL and MariaDB boasts truly advanced functionality that will help your teams deliver more value.
This page is licensed: CC BY-SA / Gnu FDL
Learn how to perform and restore full physical backups of MariaDB databases using the mariadb-backup tool, ensuring consistent data recovery.
When using mariadb-backup, you have the option of performing a full or an incremental backup. Full backups create a complete backup of the database server in an empty directory while incremental backups update a previous backup with whatever changes to the data have occurred since the backup. This page documents how to perform full backups.
In order to back up the database, you need to run mariadb-backup with the --backup option to tell it to perform a backup and with the --target-dir option to tell it where to place the backup files. When taking a full backup, the target directory must be empty or it must not exist.
To take a backup, run the following command:
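A minimal sketch of the backup command (the user name, password, and target directory are illustrative):

```bash
# Take a full physical backup into an empty target directory.
mariadb-backup --backup \
   --target-dir=/var/mariadb/backup/ \
   --user=backup_user --password=mypassword
```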
The time the backup takes depends on the size of the databases or tables you're backing up. You can cancel the backup if you need to, as the backup process does not modify the database.
mariadb-backup writes the backup files to the target directory. If the target directory doesn't exist, it creates it. If the target directory exists and contains files, mariadb-backup raises an error and aborts.
Here is an example backup directory:
You can optionally use the --history option to record metadata about your full backup in the database. This creates a centralized log and allows future incremental backups to reference this full backup by name instead of by directory path.
Privileges: The backup user requires INSERT, CREATE, and ALTER privileges on the history table (mysql.mariadb_backup_history in MariaDB 10.11+, or PERCONA_SCHEMA.xtrabackup_history in older versions).
Failure Case: If the user lacks privileges, the backup will complete the file copy process but will fail at the final step with an INSERT command denied error.
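A hedged sketch of a full backup recorded in the history table (the history name, credentials, and path are illustrative):

```bash
# --history[=name] records metadata about the backup so that later
# incremental backups can reference it by name.
mariadb-backup --backup --history=weekly_full \
   --target-dir=/var/mariadb/backup/ \
   --user=backup_user --password=mypassword
```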
The data files that mariadb-backup creates in the target directory are not point-in-time consistent, given that the data files are copied at different times during the backup operation. If you try to restore from these files, InnoDB notices the inconsistencies and crashes to protect you from corruption.
Before you can restore from a backup, you first need to prepare it to make the data files consistent. You can do so with the --prepare option.
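A minimal sketch of the prepare step (the target directory is illustrative):

```bash
# Apply the InnoDB redo log to make the backup consistent.
mariadb-backup --prepare \
   --target-dir=/var/mariadb/backup/
```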
Run mariadb-backup --backup. You must use a version of mariadb-backup that is compatible with the server version you are planning to upgrade from. For instance, when upgrading from MariaDB 10.4 to 10.5, you must use the 10.4 version of mariadb-backup. Another example: when upgrading from MariaDB 10.6 to 10.11, you must use the 10.6 version of mariadb-backup.
Run mariadb-backup --prepare, again using a compatible version of mariadb-backup, as described in the previous step.
Once the backup is complete and you have prepared the backup for restoration (previous step), you can restore the backup using either the --copy-back or the --move-back options. The --copy-back option allows you to keep the original backup files. The --move-back option actually moves the backup files to the datadir, so the original backup files are lost.
First, stop the MariaDB Server process.
Then, ensure that datadir is empty.
Then, run mariadb-backup with one of the options mentioned above:
Then, you may need to fix the file permissions.
When mariadb-backup restores a database, it preserves the file and directory privileges of the backup. However, it writes the files to disk as the user and group restoring the database. As such, after restoring a backup, you may need to adjust the owner of the data directory to match the user and group for the MariaDB Server, typically mysql for both. For example, to recursively change ownership of the files to the mysql user and group, you could execute:
Finally, start the MariaDB Server process.
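The restore steps above can be sketched as follows (the systemd service name, backup path, and datadir are typical defaults — adjust for your system):

```bash
# 1. Stop the server.
systemctl stop mariadb
# 2. Restore the prepared backup into the (empty) datadir,
#    keeping the original backup files.
mariadb-backup --copy-back \
   --target-dir=/var/mariadb/backup/
# 3. Fix ownership so the server can read its files.
chown -R mysql:mysql /var/lib/mysql/
# 4. Start the server again.
systemctl start mariadb
```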
Once a full backup is prepared, it is a fully functional MariaDB data directory. Therefore, as long as the MariaDB Server process is stopped on the target server, you can technically restore the backup using any file copying tool, such as cp or rsync. For example, you could also execute the following to restore the backup:
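For example, a sketch using rsync (paths are typical defaults — adjust for your system):

```bash
# Copy the prepared backup into the empty datadir with a plain
# file-copy tool, then fix ownership.
rsync -avrP /var/mariadb/backup/ /var/lib/mysql/
chown -R mysql:mysql /var/lib/mysql/
```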
This page is licensed: CC BY-SA / Gnu FDL
This guide provides an introduction to the various backup and restore methods available in MariaDB, helping you choose the right strategy for your data.
This article briefly discusses the main ways to backup MariaDB. For detailed descriptions and syntax, see the individual pages. More detail is in the process of being added.
Logical backups consist of the SQL statements necessary to restore the data, such as CREATE DATABASE, CREATE TABLE and INSERT.
Physical backups are performed by copying the individual data files or directories.
The main differences are as follows:
logical backups are more flexible, as the data can be restored on other hardware configurations, MariaDB versions, or even on another DBMS, while physical backups cannot be imported on significantly different hardware, a different DBMS, or potentially even a different MariaDB version.
logical backups can be performed at the level of databases and tables, while physical backups are at the level of directories and files. In storage engines such as MyISAM, and InnoDB with innodb_file_per_table enabled, each table has an equivalent set of files.
logical backups are larger in size than the equivalent physical backup.
logical backups take more time to both back up and restore than the equivalent physical backup.
log files and configuration files are not part of a logical backup.
The mariadb-backup program is a fork of Percona XtraBackup with added support for MariaDB-specific features such as compression and data-at-rest encryption.
mariadb-dump (previously mysqldump) performs a logical backup. It is the most flexible way to perform a backup and restore, and a good choice when the data size is relatively small.
For large datasets, the backup file can be large, and the restore time lengthy.
mariadb-dump dumps the data into SQL format (it can also dump into other formats, such as CSV or XML) which can then easily be imported into another database. The data can be imported into other versions of MariaDB, MySQL, or even another DBMS entirely, assuming there are no version or DBMS-specific statements in the dump.
mariadb-dump dumps triggers along with tables, as these are part of the table definition. However, stored procedures, stored functions, and events are not, and need extra parameters to be recreated explicitly (for example, --routines and --events). Procedures and functions are, however, also part of the system tables (for example, mysql.proc).
InnoDB uses the buffer pool, which caches data and indexes from its tables in memory. This buffer is very important for performance. If the InnoDB data doesn't fit in memory, it is important that the buffer pool contains the most frequently accessed data. However, recently accessed data is always a candidate for insertion into the buffer pool. If not properly configured, when a table scan happens, InnoDB may copy the whole contents of a table into the buffer pool, evicting more useful pages. The problem with logical backups is that they always imply full table scans.
An easy way to avoid this is to increase the value of the innodb_old_blocks_time system variable. It represents the number of milliseconds that must pass before a recently accessed page can be put into the "new" sublist of the buffer pool. Data that is accessed only once should remain in the "old" sublist, which means it will soon be evicted from the buffer pool. Since during the backup process the "old" sublist is likely to store data that is not useful, you could also consider resizing it by changing the value of the innodb_old_blocks_pct system variable.
It is also possible to explicitly dump the buffer pool to disk before starting a logical backup, and to restore it after the process. This undoes any negative change to the buffer pool that happens during the backup. To dump the buffer pool, the innodb_buffer_pool_dump_now system variable can be set to ON. To restore it, the innodb_buffer_pool_load_now system variable can be set to ON.
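The dump and reload can be triggered like this (a sketch; the variables take effect asynchronously, and progress can be checked via the corresponding status variables):

```sql
-- Dump the current buffer pool contents to disk before the backup.
SET GLOBAL innodb_buffer_pool_dump_now = ON;

-- ... run the logical backup here ...

-- Reload the saved buffer pool contents afterwards.
SET GLOBAL innodb_buffer_pool_load_now = ON;
```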
mariadb-dump examples
Backing up a single database
Restoring or loading the database
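Hedged sketches of both operations (the database name, credentials, and file name are illustrative):

```bash
# Back up a single database to an SQL dump file.
mariadb-dump -u root -p bookstore > bookstore_backup.sql

# Restore (load) the dump into an existing database.
mariadb -u root -p bookstore < bookstore_backup.sql
```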
See the mariadb-dump page for detailed syntax and examples.
mariadb-hotcopy performs a physical backup, and works only for backing up MyISAM and ARCHIVE tables. It can only be run on the same machine as the location of the database directories.
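A minimal sketch of its invocation (the database name and destination path are illustrative):

```bash
# Copy the tables of a database to a backup directory on the same host.
mariadb-hotcopy db_name /path/to/backup
```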
Percona XtraBackup is not supported in MariaDB. mariadb-backup is the recommended backup method to use instead of Percona XtraBackup. See the mariadb-backup documentation for more information.
Percona XtraBackup is a tool for performing fast, hot backups. It was designed specifically for XtraDB/InnoDB databases, but can be used with any storage engine (with some exceptions, such as MariaDB's compressed or encrypted InnoDB tables). It is not included with MariaDB.
Some filesystems, like Veritas, support snapshots. During the snapshot, the table must be locked. The proper steps to obtain a snapshot are:
From the client, execute FLUSH TABLES WITH READ LOCK. The client must remain open.
From a shell, execute mount vxfs snapshot.
The client can then execute UNLOCK TABLES.
Copy the snapshot files.
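The client-side steps above can be sketched as follows (the snapshot itself is taken from a shell while the session holding the lock stays open):

```sql
-- Step 1: hold a global read lock for the duration of the snapshot.
-- Keep this session open while the filesystem snapshot is created.
FLUSH TABLES WITH READ LOCK;

-- (take the filesystem snapshot from a shell now)

-- Step 3: release the lock once the snapshot exists; the snapshot
-- files can then be copied at leisure.
UNLOCK TABLES;
```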
LVM snapshots are a widely-used physical backup method, typically driven by a Perl script (mylvmbackup) acting as a wrapper. See the mylvmbackup documentation for more information.
For details, see:
Besides the system utilities, it is possible to use third-party GUI tools to perform backup and restore operations. In this context, it is worth mentioning dbForge Studio for MySQL, a feature-rich database IDE that is fully compatible with MariaDB and delivers extensive backup functionality.
The backup and restore module of the Studio allows precise up to particular database objects. The feature of scheduling regular backups offers specific settings to handle errors and keep a log of them. Additionally, settings and configurations can be saved for later reuse.
These operations are wizard-aided allowing users to set up all tasks in a visual mode.
This page is licensed: CC BY-SA / Gnu FDL
Stored procedures are precompiled collections of SQL statements stored on the server, allowing for encapsulated logic, parameterized execution, and improved application performance.
A Stored Procedure is a routine invoked with a CALL statement. It may have input parameters, output parameters and parameters that are both input parameters and output parameters.
Here's a skeleton example to see a stored procedure in action:
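A sketch of that skeleton, reconstructed from the description below (the table and column names are illustrative assumptions):

```sql
DELIMITER //

CREATE PROCEDURE Reset_animal_count()
  MODIFIES SQL DATA
  -- Illustrative table/column names; the point is the single UPDATE body.
  UPDATE animal_count SET animals = 0;
//

DELIMITER ;

-- Invoke it with CALL:
CALL Reset_animal_count();
```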
First, the delimiter is changed, since the function definition will contain the regular semicolon delimiter. The procedure is named Reset_animal_count. MODIFIES SQL DATA indicates that the procedure will perform a write action of sorts, and modify data. It's for advisory purposes only. Finally, there's the actual SQL statement - an UPDATE.
A more complex example, with input parameters, from an actual procedure used by banks:
See CREATE PROCEDURE for full syntax details.
Security is a key reason. Banks commonly use stored procedures so that applications and users don't have direct access to the tables. Stored procedures are also useful in an environment where multiple languages and clients are all used to perform the same operations.
To find which stored procedures are stored on the server, use SHOW PROCEDURE STATUS,
or query the ROUTINES table in the INFORMATION_SCHEMA database directly:
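For example, a sketch of such a query:

```sql
-- List stored procedures known to the server.
SELECT ROUTINE_SCHEMA, ROUTINE_NAME
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_TYPE = 'PROCEDURE';
```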
To find out what the stored procedure does, use SHOW CREATE PROCEDURE.
To drop a stored procedure, use the DROP PROCEDURE statement.
To change the characteristics of a stored procedure, use ALTER PROCEDURE. However, you cannot change the parameters or body of a stored procedure using this statement; to make such changes, you must drop and re-create the procedure using CREATE OR REPLACE PROCEDURE (which retains existing privileges), or DROP PROCEDURE followed by CREATE PROCEDURE.
See also the related articles on stored routines.
This page is licensed: CC BY-SA / Gnu FDL
When binary logging is enabled, stored routines may require special handling (like SUPER privileges) if they are non-deterministic, to ensure consistent replication.
Binary logging can be row-based, statement-based, or a mix of the two. See Binary Log Formats for more details on the formats. If logging is statement-based, it is possible that a statement will have different effects on the master and on the slave.
Stored routines are particularly prone to this, for two main reasons:
stored routines can be non-deterministic, in other words non-repeatable, and therefore have different results each time they are run.
the slave thread executing the stored routine on the slave holds full privileges, while this may not be the case when the routine was run on the master.
The problems with replication will only occur with statement-based logging. If row-based logging is used, since changes are made to rows based on the master's rows, there is no possibility of the slave and master getting out of sync.
By default, with row-based replication, triggers run on the master, and the effects of their executions are replicated to the slaves. However, it is also possible to run triggers on the slaves; see Running Triggers on the Slave for Row-based Events.
If the following criteria are met, then there are some limitations on whether stored routines can be created:
The binary log is enabled, and the binlog_format system variable is set to STATEMENT. See Binary Log Formats for more information.
The log_bin_trust_function_creators system variable is set to OFF, which is the default value.
If the above criteria are met, then the following limitations apply:
When a stored function is created, it must be declared as either DETERMINISTIC, NO SQL or READS SQL DATA, or else an error will occur. MariaDB cannot check whether a function is actually deterministic, and relies on the correct declaration being used.
To create or modify a stored function, a user requires the SUPER privilege in addition to the regular privileges.
A deterministic function:
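A minimal sketch (the function name and body are illustrative):

```sql
DELIMITER //

-- Deterministic: the same input always yields the same result.
CREATE FUNCTION trebler (x INT) RETURNS INT
  DETERMINISTIC
  RETURN x * 3;
//

DELIMITER ;
```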
A non-deterministic function, since it calls a non-deterministic built-in function:
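A minimal sketch (the function name is illustrative; RAND() is used here as the non-deterministic built-in):

```sql
DELIMITER //

-- RAND() returns a different value on each call, so this function
-- is non-deterministic. NO SQL describes the body, satisfying the
-- declaration requirement discussed above.
CREATE FUNCTION lucky_number () RETURNS INT
  NO SQL
  RETURN FLOOR(RAND() * 100);
//

DELIMITER ;
```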
This page is licensed: CC BY-SA / Gnu FDL
A list of status variables specific to the Aria engine, providing metrics on page cache usage, transaction log syncs, and other internal operations.
This page documents status variables related to the Aria storage engine. See Server Status Variables for a complete list of status variables that can be viewed with SHOW STATUS.
See also the Full list of MariaDB options, system and status variables.
Aria_pagecache_blocks_not_flushed
Description: The number of dirty blocks in the Aria page cache. The global value can be flushed by FLUSH STATUS.
Scope: Global
Data Type: numeric
Aria_pagecache_blocks_unused
Description: Free blocks in the Aria page cache. The global value can be flushed by FLUSH STATUS.
Scope: Global
Data Type: numeric
Aria_pagecache_blocks_used
Description: Blocks used in the Aria page cache. The global value can be flushed by FLUSH STATUS.
Scope: Global
Data Type: numeric
Aria_pagecache_read_requests
Description: The number of requests to read something from the Aria page cache.
Scope: Global
Data Type: numeric
Aria_pagecache_reads
Description: The number of Aria page cache read requests that caused a block to be read from the disk.
Scope: Global
Data Type: numeric
Aria_pagecache_write_requests
Description: The number of requests to write a block to the Aria page cache.
Scope: Global
Data Type: numeric
Aria_pagecache_writes
Description: The number of blocks written to disk from the Aria page cache.
Scope: Global
Data Type: numeric
Aria_transaction_log_syncs
Description: The number of Aria log fsyncs.
Scope: Global
Data Type: numeric
This page is licensed: CC BY-SA / Gnu FDL
The CONNECT storage engine has been deprecated.
This storage engine has been deprecated.
CONNECT is not just a new “YASE” (Yet another Storage Engine) that provides another way to store data with additional features. It brings a new dimension to MariaDB, already one of the best products to deal with traditional database transactional applications, further into the world of business intelligence and data analysis, including NoSQL facilities. Indeed, BI is the set of techniques and tools for the transformation of raw data into meaningful and useful information. And where is this data?
"It's amazing in an age where relational databases reign supreme when it comes to managing data that so much information still exists outside RDBMS engines in the form of flat files and other such constructs. In most enterprises, data is passed back and forth between disparate systems in a fashion and speed that would rival the busiest expressways in the world, with much of this data existing in common, delimited files. Target systems intercept these source files and then typically proceed to load them via ETL (extract, transform, load) processes into databases that then utilize the information for business intelligence, transactional functions, or other standard operations. ETL tasks and data movement jobs can consume quite a bit of time and resources, especially if large volumes of data are present that require loading into a database. This being the case, many DBAs welcome alternative means of accessing and managing data that exists in file format."
— Robin Schumacher
What he describes is known as MED (Management of External Data) enabling the handling of data not stored in a DBMS database as if it were stored in tables. An ISO standard exists that describes one way to implement and use MED in SQL by defining foreign tables for which an external FDW (Foreign Data Wrapper) has been developed in C.
However, since this was written, a new source of data has developed: the "cloud". Data exists worldwide and, in particular, can be obtained in JSON or XML format in answer to REST queries. Recent versions of CONNECT make it possible to create JSON, XML or CSV tables based on data retrieved from such REST queries.
MED as described above is a rather complex way to achieve this goal and MariaDB does not support the ISO SQL/MED standard. But, to cover the need, possibly in transactional but mostly in decision support applications, the CONNECT storage engine supports MED in a much simpler way.
The main features of CONNECT are:
No need for additional SQL language extensions.
Embedded wrappers for many external data types (files, data sources, virtual).
NoSQL query facilities for JSON, XML and HTML files, and using JSON UDFs.
NoSQL data obtained from REST queries (requires cpprestsdk).
With CONNECT, MariaDB has one of the most advanced implementations of MED and NoSQL, without the need for complex additions to the SQL syntax (foreign tables are "normal" tables using the CONNECT engine).
Giving MariaDB easy and natural access to external data enables the use of all of its powerful functions and SQL-handling abilities for developing business intelligence applications.
With version 1.07 of CONNECT, retrieving data from REST queries is available in all binary distributions of MariaDB, and, from 1.07.002, CONNECT allows workspaces greater than 4GB.
Robin Schumacher is Vice President Products at DataStax and former Director of Product Management at MySQL. He has over 13 years of database experience in DB2, MySQL, Oracle, SQL Server and other database engines.
Discover these variants that allow partitioning based on multiple columns and non-integer types, offering greater flexibility than standard RANGE/LIST.
RANGE COLUMNS and LIST COLUMNS are variants of, respectively, RANGE and LIST. With these partitioning types, there is not a single partitioning expression; instead, a list of one or more columns is accepted. The following rules apply:
The list can contain one or more columns.
Only bare columns are permitted; no expressions.
All the specified columns are compared to the specified values to determine which partition should contain a specific row. See below for details.
The last part of a CREATE TABLE statement can be the definition of the new table's partitions. In the case of RANGE COLUMNS partitioning, the syntax is as follows:
The syntax for LIST COLUMNS is as follows:
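Sketches of both syntax forms (column and partition names are placeholders):

```sql
-- RANGE COLUMNS: each partition is bounded by a tuple of values.
PARTITION BY RANGE COLUMNS (col1, col2)
(
  PARTITION p0 VALUES LESS THAN (value1a, value2a),
  PARTITION p1 VALUES LESS THAN (value1b, value2b),
  PARTITION pmax VALUES LESS THAN (MAXVALUE, MAXVALUE)
);

-- LIST COLUMNS: each partition enumerates the exact value tuples
-- it accepts.
PARTITION BY LIST COLUMNS (col1, col2)
(
  PARTITION p0 VALUES IN ((value1a, value2a), (value1b, value2b)),
  PARTITION p1 VALUES IN ((value1c, value2c))
);
```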
partition_name is the name of a partition.
To determine which partition should contain a row, all specified columns are compared to each partition definition.
With LIST COLUMNS, a row matches a partition if all row values are identical to the specified values. At most one partition can match the row.
With RANGE COLUMNS, a row matches a partition if its value tuple is less than the specified value tuple in lexicographic order. The first partition that matches the row values is used.
The DEFAULT partition catches all records which do not fit in other partitions. Only one DEFAULT partition is allowed.
RANGE COLUMNS partition:
LIST COLUMNS partition:
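Hedged sketches of both partition types (table structure and values are illustrative):

```sql
-- RANGE COLUMNS example: rows are routed by date range.
CREATE TABLE logins (
  id INT,
  dt DATETIME
)
PARTITION BY RANGE COLUMNS (dt) (
  PARTITION p2023 VALUES LESS THAN ('2024-01-01'),
  PARTITION p2024 VALUES LESS THAN ('2025-01-01'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);

-- LIST COLUMNS example: rows are routed by exact value, with a
-- DEFAULT partition catching everything else.
CREATE TABLE customers (
  id INT,
  country CHAR(2)
)
PARTITION BY LIST COLUMNS (country) (
  PARTITION p_emea  VALUES IN ('DE', 'FR', 'UK'),
  PARTITION p_apac  VALUES IN ('JP', 'AU'),
  PARTITION p_other DEFAULT
);
```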
This page is licensed: CC BY-SA / Gnu FDL
Learn to import data into MariaDB with LOAD DATA INFILE and mariadb-import. This guide covers bulk loading, handling duplicates, and converting foreign data formats.
When a MariaDB developer first creates a MariaDB database for a client, often the client has already accumulated data in other, simpler applications. Being able to convert data easily to MariaDB is critical. In the previous two articles of this MariaDB series, we explored how to set up a database and how to query one. In this third installment, we will introduce some methods and tools for bulk importing of data into MariaDB. This isn't an overly difficult task, but the processing of large amounts of data can be intimidating for a newcomer, and as a result it can be a barrier to getting started with MariaDB. Additionally, for intermediate developers, there are many nuances to consider for a clean import, which is especially important for automating regularly scheduled imports. There are also constraints to deal with that may be imposed on a developer by a web hosting company.
Clients sometimes give developers raw data in formats created by simple database programs like MS Access ®. Since non-technical clients don't typically understand database concepts, new clients often give me their initial data in Excel spreadsheets. Let's first look at a simple method for importing data. The simplest way to deal with incompatible data in any format is to load it up in its original software and export it to a delimited text file. Most applications have the ability to export data to a text format and will allow the user to set the delimiters. We like to use the bar (i.e., |) as the delimiter.
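Once you have a bar-delimited text file, it can be loaded with LOAD DATA INFILE. A minimal sketch (the file path and table name are illustrative):

```sql
-- Load a bar-delimited text file into an existing table.
LOAD DATA LOCAL INFILE '/tmp/books.txt'
  INTO TABLE books
  FIELDS TERMINATED BY '|'
  LINES TERMINATED BY '\n';
```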
Identify and resolve common connection problems, including server status checks, authentication errors, and network configuration.
If you are completely new to MariaDB and relational databases, you may want to start with the beginner primers. Also, make sure you understand the connection parameters discussed in the Connecting to MariaDB article.
There are a number of common problems that can occur when connecting to MariaDB.
If the error you get is something like:
or
the server is either not running, or not running on the specified port, socket or pipe. Make sure you are using the correct host, port, pipe, socket and protocol options, or alternatively, see Getting, Installing and Upgrading MariaDB.
A Stored Function is a set of SQL statements that can be called by name, accepts parameters, and returns a single value, enhancing SQL with custom logic.
A Stored Function is a defined function that is called from within an SQL statement like a regular function and returns a single value.
Here's a skeleton example to see a stored function in action:
First, the delimiter is changed, since the function definition will contain the regular semicolon delimiter. See the documentation on delimiters for more. Then the function is named FortyTwo and defined to return a TINYINT. The DETERMINISTIC characteristic indicates that the function always produces the same result for the same input.
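A sketch of the skeleton described above:

```sql
DELIMITER //

CREATE FUNCTION FortyTwo() RETURNS TINYINT
  DETERMINISTIC
  RETURN 42;
//

DELIMITER ;

-- Call it like any built-in function:
SELECT FortyTwo();
```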
A quick reference guide for essential SQL statements in MariaDB, categorized by Data Definition, Data Manipulation, and Transaction Control.
Initialize a replication slave using a backup. This guide shows how to use mariadb-backup to provision a new replica from a master server.
This page documents how to set up a replica from a backup.
If you are using MariaDB Galera Cluster, then you may want to try one of the following pages instead:
The CONNECT storage engine has been deprecated.
This storage engine has been deprecated.
The main characteristic of CONNECT is to enable accessing data scattered on a machine as if it were a centralized database. This, and the fact that locking is not used by CONNECT (data files are opened and closed for each query), makes CONNECT very useful for importing or exporting data into or from a MariaDB database, and also for all types of business intelligence applications. However, it is not suited for transactional applications.
For instance, the index type used by CONNECT is closer to bitmap indexing than to B-trees. It is very fast for retrieving results, but not when updating. In fact, even if only one indexed value is modified in a big table, the index is entirely rebuilt (though this is still four to five times faster than rebuilding a B-tree index). But normally in business intelligence applications, files are not modified so often.
If you are using CONNECT to analyze files that can be modified by an external process, the indexes are of course not modified by it and become outdated. Use the OPTIMIZE TABLE command to update them before using the tables based on them.
The Archive storage engine is optimized for high-speed insertion and compression of large amounts of data, suitable for logging and auditing.
The ARCHIVE storage engine is a storage engine that uses gzip to compress rows. It is mainly used for storing large amounts of data, without indexes, with only a very small footprint.
A table using the ARCHIVE storage engine is stored in two files on disk. There's a table definition file with an extension of .frm, and a data file with the extension .ARZ. At times during optimization, a .ARN file will appear.
New rows are inserted into a compression buffer and are flushed to disk when needed. SELECTs cause a flush. Sometimes, rows created by multi-row inserts are not visible until the statement is complete.
ARCHIVE allows a maximum of one key. The key must be on an AUTO_INCREMENT column, and can be a PRIMARY KEY or a non-unique key. However, it has a limitation: it is not possible to insert a value that is lower than the next AUTO_INCREMENT value.
Stored Aggregate Functions allow users to create custom aggregate functions that process a sequence of rows and return a single summary result.
Stored aggregate functions are functions that are computed over a sequence of rows and return one result for the sequence of rows.
Creating a custom aggregate function is done using the CREATE FUNCTION statement with two main differences:
The addition of the AGGREGATE keyword, so CREATE AGGREGATE FUNCTION.
The FETCH GROUP NEXT ROW instruction inside the loop.
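A minimal sketch illustrating both differences (the function name and summing logic are illustrative):

```sql
DELIMITER //

-- A custom aggregate that sums its input rows. FETCH GROUP NEXT ROW
-- pulls the next row of the group; the NOT FOUND handler returns the
-- accumulated result once the group is exhausted.
CREATE AGGREGATE FUNCTION agg_sum (x INT) RETURNS INT
BEGIN
  DECLARE total INT DEFAULT 0;
  DECLARE CONTINUE HANDLER FOR NOT FOUND RETURN total;
  LOOP
    FETCH GROUP NEXT ROW;
    SET total = total + x;
  END LOOP;
END //

DELIMITER ;
```

It can then be used like a built-in aggregate, e.g. SELECT agg_sum(amount) FROM orders GROUP BY user_id.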
Understand the different row formats supported by Aria, particularly the default PAGE format which enables crash safety and better concurrency.
The Aria storage engine supports three different table storage formats.
These are FIXED, DYNAMIC and PAGE, and they can be set with the ROW_FORMAT option in the CREATE TABLE statement. PAGE is the default format, while FIXED and DYNAMIC are essentially the same as the MyISAM formats.
The SHOW TABLE STATUS statement can be used to see the storage format used by a table.
Fixed-length (or static) tables contain records of a fixed-length. Each column is the same length for all records, regardless of the actual contents. It is the default format if a table has no BLOB, TEXT, VARCHAR or VARBINARY fields, and no ROW FORMAT is provided. You can also specify a fixed table with ROW_FORMAT=FIXED in the table definition.
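A sketch showing an explicit row format and how to inspect it (table and column names are illustrative):

```sql
-- Create an Aria table with a fixed row format; CHAR columns keep
-- the table eligible for FIXED.
CREATE TABLE t1 (a INT, b CHAR(10)) ENGINE=Aria ROW_FORMAT=FIXED;

-- The Row_format column of SHOW TABLE STATUS reports the format.
SHOW TABLE STATUS LIKE 't1';
```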
DROP PROCEDURE [IF EXISTS] sp_name

DROP PROCEDURE simpleproc;

DROP PROCEDURE simpleproc;
ERROR 1305 (42000): PROCEDURE test.simpleproc does not exist
DROP PROCEDURE IF EXISTS simpleproc;
Query OK, 0 rows affected, 1 warning (0.00 sec)
SHOW WARNINGS;
+-------+------+------------------------------------------+
| Level | Code | Message |
+-------+------+------------------------------------------+
| Note | 1305 | PROCEDURE test.simpleproc does not exist |
+-------+------+------------------------------------------+

EXPLAIN PARTITIONS SELECT * FROM orders WHERE id < 15000000;
+------+-------------+--------+------------+-------+---------------+---------+---------+------+------+-------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+--------+------------+-------+---------------+---------+---------+------+------+-------------+
| 1 | SIMPLE | orders | p0,p1 | range | PRIMARY | PRIMARY | 4 | NULL | 2 | Using where |
+------+-------------+--------+------------+-------+---------------+---------+---------+------+------+-------------+

SELECT * FROM orders PARTITION (p3) WHERE user_id = 50;

SELECT * FROM orders PARTITION (p2,p3) WHERE user_id >= 40;

CREATE DATABASE bookstore;

USE bookstore;

CREATE TABLE books (
isbn CHAR(20) PRIMARY KEY,
title VARCHAR(50),
author_id INT,
publisher_id INT,
year_pub CHAR(4),
description TEXT );

DESCRIBE books;
+--------------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------------+-------------+------+-----+---------+-------+
| isbn | char(20) | NO | PRI | NULL | |
| title | varchar(50) | YES | | NULL | |
| author_id | int(11) | YES | | NULL | |
| publisher_id | int(11) | YES | | NULL | |
| year_pub | char(4) | YES | | NULL | |
| description | text | YES | | NULL | |
+--------------+-------------+------+-----+---------+-------+

CREATE TABLE authors
(author_id INT AUTO_INCREMENT PRIMARY KEY,
name_last VARCHAR(50),
name_first VARCHAR(50),
country VARCHAR(50) );

INSERT INTO authors
(name_last, name_first, country)
VALUES('Kafka', 'Franz', 'Czech Republic');

INSERT INTO books
(title, author_id, isbn, year_pub)
VALUES('The Castle', '1', '0805211063', '1998');

INSERT INTO books
(title, author_id, isbn, year_pub)
VALUES('The Trial', '1', '0805210407', '1995'),
('The Metamorphosis', '1', '0553213695', '1995'),
('America', '1', '0805210644', '1995');

SELECT title
FROM books;

SELECT title
FROM books
LIMIT 5;

SELECT title, name_last
FROM books
JOIN authors USING (author_id);

SELECT title AS 'Kafka Books'
FROM books
JOIN authors USING (author_id)
WHERE name_last = 'Kafka';
+-------------------+
| Kafka Books |
+-------------------+
| The Castle |
| The Trial |
| The Metamorphosis |
| America |
+-------------------+

UPDATE books
SET title = 'Amerika'
WHERE isbn = '0805210644';

DELETE FROM books
WHERE author_id = '2034';

openssl enc -d -aes-256-cbc -k mypass -in backup.xb.enc | mbstream -x

mariadb-backup --user=root --backup --stream=xbstream | gzip > backupstream.gz

gunzip -c backupstream.gz | mbstream -x

mariadb-backup --user=root --backup --stream=xbstream | gzip | openssl enc -aes-256-cbc -k mypass > backup.xb.gz.enc

openssl enc -d -aes-256-cbc -k mypass -in backup.xb.gz.enc | gzip -d | mbstream -x

mariadb-backup --user=root --backup --stream=xbstream | 7z a -si backup.xb.7z

7z e backup.xb.7z -so | mbstream -x

mariadb-backup --user=root --backup --stream=xbstream | zstd - -o backup.xb.zst -f -1

zstd -d backup.xb.zst -c | mbstream -x

mariadb-backup --user=root --backup --stream=xbstream | gpg -c --passphrase SECRET --batch --yes -o backup.xb.gpg

gpg --decrypt --passphrase SECRET --batch --yes backup.xb.gpg | mbstream -x

DELIMITER //
CREATE PROCEDURE Reset_animal_count()
MODIFIES SQL DATA
UPDATE animal_count SET animals = 0;
//
DELIMITER ;

Tables containing BLOB or TEXT fields cannot be FIXED, as by design these are both dynamic fields.
Fixed-length tables have a number of characteristics:
fast, since MariaDB will always know where a record begins
easy to cache
take up more space than dynamic tables, as the maximum amount of storage space is allocated to each record.
reconstructing after a crash is uncomplicated due to the fixed positions
no fragmentation or need to re-organize, unless records have been deleted and you want to free the space up.
Dynamic tables contain records of a variable length. It is the default format if a table has any BLOB, TEXT, VARCHAR or VARBINARY fields, and no ROW FORMAT is provided. You can also specify a DYNAMIC table with ROW_FORMAT=DYNAMIC in the table definition.
Dynamic tables have a number of characteristics
Each row contains a header indicating the length of the row.
Rows tend to become fragmented easily. Updating a record to be longer will likely result in it being stored in more than one place on the disk.
All string columns with a length of four or more are dynamic.
They require much less space than fixed-length tables.
Restoring after a crash is more complicated than with FIXED tables.
Page format is the default format for Aria tables, and is the only format that can be used if TRANSACTIONAL=1.
Page tables have a number of characteristics:
It's cached by the page cache, which gives better random performance as it uses fewer system calls.
Does not fragment as easily as the DYNAMIC format during UPDATEs. The maximum number of fragments is very low.
Updates more quickly than dynamic tables.
Has a slight storage overhead, mainly noticeable on very small rows
Slower to perform a full table scan
Slower if there are multiple duplicated keys, as Aria will first write a row, then keys, and only then check for duplicates
See Aria Storage Engine for the impact of the TRANSACTIONAL option on the row format.
This page is licensed: CC BY-SA / Gnu FDL
To tell it which tables to back up, you can use the --tables option.
To tell it which tables to exclude from the backup, you can provide the --tables-exclude option.
To tell it to check a file for specific tables to backup, you can provide the --tables-file option.
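The three options can be sketched as follows (paths, database names and regular expressions here are examples only):

```shell
# back up only tables matching a regex
mariadb-backup --backup --target-dir=/var/mariadb/backup/ \
    --tables='mydb\.tab_[0-9]+'

# back up everything except matching tables
mariadb-backup --backup --target-dir=/var/mariadb/backup/ \
    --tables-exclude='mydb\.junk.*'

# read the list of tables to back up from a file, one table per line
mariadb-backup --backup --target-dir=/var/mariadb/backup/ \
    --tables-file=/tmp/tables.txt
```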
From a shell, unmount the snapshot with umount snapshot.
Triggers can also update data. The slave uses the DEFINER attribute to determine which user is taken to have created the trigger.
Note that the above limitations do not apply to stored procedures or to events.
This also means that CONNECT is not designed to be used by centralized servers, which are mostly used for transactions and often must run a long time without human intervention.
Performance varies a great deal depending on the table type. For instance, ODBC tables are only retrieved as fast as the other DBMS can deliver them. If you have a lot of queries to execute, the best way to optimize your work can sometimes be to translate the data from one type to another. Fortunately this is very simple with CONNECT. Fixed formats like FIX, BIN or VEC tables can be created from slower ones by commands such as:
FIX and BIN are often the better choice because the I/O functions are
done on blocks of BLOCK_SIZE rows. VEC tables can be very efficient for
tables having many columns only a few being used in each query. Furthermore,
for tables of reasonable size, the MAPPED option can very often speed up
many queries.
Be aware of the two broad kinds of CONNECT tables:
Inward
They are tables whose file name is not specified at creation. An empty file is given a default name (tabname.tabtype) and is populated like for other engines. They do not require the FILE privilege and can be used for testing purposes.
Outward
They are all other CONNECT tables and access external data sources or files. They are the true useful tables but require the FILE privilege.
For outward tables, the DROP TABLE statement just removes the table definition but does not erase the table data. However, dropping an inward table erases the table data as well.
Be careful using the ALTER TABLE statement. Currently data compatibility is not tested and the modified definition can become incompatible with the data. In particular, ALTER modifies the table definition only but does not modify the table data. Consequently, the table type should not be modified this way, except to correct an incorrect definition. Also, adding, dropping or modifying columns may be wrong because the default offset values (when not explicitly given by the FLAG option) may be wrong when recompiled with missing columns.
Safe uses of ALTER are indexing, as we have seen earlier, and changing options such as MAPPED or HUGE, which do not impact the data format but just the way the data file is accessed. Modifying the BLOCK_SIZE option is all right with FIX, BIN, DBF and split VEC tables; however, it is unsafe for VEC tables that are not split (only one data file) because at their creation the estimated size was made a multiple of the block size. This can cause errors if this estimate is not a multiple of the new value of the block size.
In all cases, it is safer to drop and re-create the table (outward tables) or to make another one from the table that must be modified.
CONNECT can execute these commands using two different algorithms:
It can do it in place, directly modifying rows (update) or moving rows (delete) within the table file. This is a fast way to do it in particular when indexing is used.
It can do it using a temporary file to make the changes. This is required when updating variable record length tables and is more secure in all cases.
The choice between these algorithms depends on the session variable connect_use_tempfile.
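For example, the temporary-file algorithm can be forced for the current session (a sketch; the exact set of accepted values is documented with the CONNECT engine):

```sql
-- force CONNECT to use a temporary file for UPDATE/DELETE in this session
SET connect_use_tempfile = 'YES';
```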
This page is licensed: GPLv2
It requires a column value, or an expression based on a column value, which is hashed, as well as the number of partitions into which to divide the table.
partitioning_expression needs to return a non-constant, deterministic integer. It is evaluated for each insert and update, so overly complex expressions can lead to performance issues. A hashing function operating on a single column, and where the value changes consistently with the column value, allows for easy pruning on ranges of partitions, and is usually a better choice. For this reason, using multiple columns in a hashing expression is not usually recommended.
number_of_partitions is a positive integer specifying the number of partitions into which to divide the table. If the PARTITIONS clause is omitted, the default number of partitions is one.
To determine which partition to use, perform the following calculation:
For example, if the expression is TO_DAYS(datetime_column) and the number of partitions is 5, inserting a datetime value of '2023-11-15' would determine the partition as follows:
TO_DAYS('2023-11-15') gives a value of 739204.
MOD(739204,5) returns 4, so the 4th partition is used.
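The same arithmetic can be checked directly in SQL:

```sql
SELECT TO_DAYS('2023-11-15') AS day_number,            -- 739204
       MOD(TO_DAYS('2023-11-15'), 5) AS partition_num; -- 4, so partition p4
```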
HASH partitioning makes use of the modulus of the hashing function's value. The LINEAR HASH partitioning type is similar, but uses a powers-of-two algorithm instead. With HASH, data is more likely to be evenly distributed over the partitions than with LINEAR HASH; however, adding, dropping, merging and splitting partitions is much slower.
See the Information Schema PARTITIONS Table for more information.
See Partition Maintenance for suggestions on managing partitions.
This page is licensed: CC BY-SA / Gnu FDL
$ mariadb-backup --backup \
   --target-dir=/var/mariadb/backup/ \
   --databases='app1 app2' --tables='tab_[0-9]+' \
   --user=mariadb-backup --password=mypassword

mariadb-backup --backup --databases="db1" \
   --target-dir=/backup --history=partial_db1

$ mariadb-backup --prepare --export \
   --target-dir=/var/mariadb/backup/

$ mariadb-backup --backup \
   --target-dir=/var/mariadb/backup/ \
   --user=mariadb-backup --password=mypassword

$ ls /var/mariadb/backup/

aria_log.0000001  mysql               xtrabackup_checkpoints
aria_log_control  performance_schema  xtrabackup_info
backup-my.cnf     test                xtrabackup_logfile
ibdata1           xtrabackup_binlog_info

$ mariadb-backup --backup \
   --target-dir=/var/mariadb/backup/ \
   --user=mariadb-backup --password=mypassword \
   --history=full_backup_weekly

$ mariadb-backup --prepare \
   --target-dir=/var/mariadb/backup/

$ mariadb-backup --copy-back \
   --target-dir=/var/mariadb/backup/

$ chown -R mysql:mysql /var/lib/mysql/

$ rsync -avrP /var/mariadb/backup /var/lib/mysql/
$ chown -R mysql:mysql /var/lib/mysql/

mariadb-dump db_name > backup-file.sql

mariadb db_name < backup-file.sql

mariadb-hotcopy db_name [/path/to/new_directory]

mariadb-hotcopy db_name_1 ... db_name_n /path/to/new_directory

SELECT * FROM animal_count;
+---------+
| animals |
+---------+
| 101 |
+---------+
CALL Reset_animal_count();
SELECT * FROM animal_count;
+---------+
| animals |
+---------+
| 0 |
+---------+

CREATE PROCEDURE
Withdraw /* Routine name */
(parameter_amount DECIMAL(6,2), /* Parameter list */
parameter_teller_id INTEGER,
parameter_customer_id INTEGER)
MODIFIES SQL DATA /* Data access clause */
BEGIN /* Routine body */
UPDATE Customers
SET balance = balance - parameter_amount
WHERE customer_id = parameter_customer_id;
UPDATE Tellers
SET cash_on_hand = cash_on_hand + parameter_amount
WHERE teller_id = parameter_teller_id;
INSERT INTO Transactions VALUES (
parameter_customer_id,
parameter_teller_id,
parameter_amount);
END;

SHOW PROCEDURE STATUS\G
*************************** 1. row ***************************
Db: test
Name: Reset_animal_count
Type: PROCEDURE
Definer: root@localhost
Modified: 2013-06-03 08:55:03
Created: 2013-06-03 08:55:03
Security_type: DEFINER
Comment:
character_set_client: utf8
collation_connection: utf8_general_ci
Database Collation: latin1_swedish_ci

SELECT ROUTINE_NAME FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_TYPE='PROCEDURE';
+--------------------+
| ROUTINE_NAME |
+--------------------+
| Reset_animal_count |
+--------------------+

SHOW CREATE PROCEDURE Reset_animal_count\G
*************************** 1. row ***************************
Procedure: Reset_animal_count
sql_mode:
Create Procedure: CREATE DEFINER=`root`@`localhost` PROCEDURE `Reset_animal_count`()
MODIFIES SQL DATA
UPDATE animal_count SET animals = 0
character_set_client: utf8
collation_connection: utf8_general_ci
Database Collation: latin1_swedish_ci

DROP PROCEDURE Reset_animal_count;

DELIMITER //
CREATE FUNCTION trust_me(x INT)
RETURNS INT
DETERMINISTIC
READS SQL DATA
BEGIN
RETURN (x);
END //
DELIMITER ;

DELIMITER //
CREATE FUNCTION dont_trust_me()
RETURNS INT
BEGIN
RETURN UUID_SHORT();
END //
DELIMITER ;

PARTITION BY RANGE COLUMNS (col1, col2, ...)
(
PARTITION partition_name VALUES LESS THAN (value1, value2, ...),
[ PARTITION partition_name VALUES LESS THAN (value1, value2, ...), ... ]
)

PARTITION BY LIST COLUMNS (partitioning_expression)
(
PARTITION partition_name VALUES IN (value1, value2, ...),
[ PARTITION partition_name VALUES IN (value1, value2, ...), ... ]
[ PARTITION partition_name DEFAULT ]
)

CREATE OR REPLACE TABLE t1 (
date1 DATE NOT NULL,
date2 DATE NOT NULL
)
ENGINE = InnoDB
PARTITION BY RANGE COLUMNS (date1,date2) (
PARTITION p0 VALUES LESS THAN ('2013-01-01', '1994-12-01'),
PARTITION p1 VALUES LESS THAN ('2014-01-01', '1995-12-01'),
PARTITION p2 VALUES LESS THAN ('2015-01-01', '1996-12-01')
);

CREATE OR REPLACE TABLE t1 (
num TINYINT(1) NOT NULL
)
ENGINE = InnoDB
PARTITION BY LIST COLUMNS (num) (
PARTITION p0 VALUES IN (0,1),
PARTITION p1 VALUES IN (2,3),
PARTITION p2 DEFAULT
);

CREATE TABLE fastable table_specs SELECT * FROM slowtable;

PARTITION BY HASH (partitioning_expression)
[PARTITIONS(number_of_partitions)]

MOD(partitioning_expression, number_of_partitions)

CREATE OR REPLACE TABLE t1 (c1 INT, c2 DATETIME)
PARTITION BY HASH(TO_DAYS(c2))
PARTITIONS 5;

INSERT INTO t1 VALUES (1,'2023-11-15');
SELECT PARTITION_NAME,TABLE_ROWS FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_SCHEMA='test' AND TABLE_NAME='t1';
+----------------+------------+
| PARTITION_NAME | TABLE_ROWS |
+----------------+------------+
| p0 | 0 |
| p1 | 0 |
| p2 | 0 |
| p3 | 0 |
| p4 | 1 |
+----------------+------------+

A new NoSQL data type, MONGO, for accessing MongoDB collections as relational tables.
Read/Write access to external files of most commonly used formats.
Direct access to most external data sources via ODBC, JDBC and MySQL or MongoDB API.
Only used columns are retrieved from external scan.
Push-down WHERE clauses when appropriate.
Support of special and virtual columns.
Parallel execution of multi-table queries (currently unavailable).
Supports partitioning by sub-files or by sub-tables (enabling table sharding).
Support of MRR for SELECT, UPDATE and DELETE.
Provides remote, block, dynamic and virtual indexing.
Can execute complex queries on remote servers.
Provides an API that allows writing additional FDW in C++.
For the examples in this article, we will assume that a fictitious client's data was in Excel and that the exported text file is named prospects.txt. It contains contact information about prospective customers for the client's sales department, located on the client's intranet site. The data is to be imported into a MariaDB table called prospect_contact, in a database called sales_dept. To make the process simpler, the order and number of columns in MS Excel® (the format of the data provided by the client) should be the same as the table into which the data is going to be imported. So if prospect_contact has columns that are not included in the spreadsheet, one would make a copy of the spreadsheet, add the missing columns and leave them blank. If there are columns in the spreadsheet that aren't in prospect_contact, one would either add them to the MariaDB table or, if they're not to be imported, delete the extra columns from the spreadsheet. One should also delete any headings and footnotes from the spreadsheet. After this is completed, the data can be exported. Since this is Unix Review, we'll skip how one would export data in Excel and assume that the task was accomplished easily enough using its export wizard.
The next step is to upload the data text file to the client's web site by FTP. It should be uploaded in ASCII mode. Binary mode may send binary hard-returns for row-endings. Also, it's a good security habit to upload data files to non-public directories. Many web hosting companies provide virtual domains with a directory like /public_html, which is the document root for the Apache web server; it typically contains the site's web pages. In such a situation, / is a virtual root containing logs and other files that are inaccessible to the public. We usually create a directory called tmp in the virtual root directory to hold data files temporarily for importing into MariaDB. Once that's done, all that's required is to log into MariaDB with the mariadb client as an administrative user (if not root, then a user with FILE privileges), and run the proper SQL statement to import the data.
The LOAD DATA INFILE statement is the easiest way to import data from a plain text file into MariaDB. Below is what one would enter in the mariadb client to load the data in the file called prospects.txt into the table prospect_contact:
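A sketch of such a statement, with the pipe field delimiter described below (the /tmp path is an example):

```sql
LOAD DATA INFILE '/tmp/prospects.txt'
INTO TABLE prospect_contact
FIELDS TERMINATED BY '|';
```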
Before entering the statement above, the MariaDB session would, of course, be switched to the sales_dept database with a USE statement. It is possible, though, to specify the database along with the table name (e.g., sales_dept.prospect_contact). If the server is running Windows, the forward slashes are still used for the text file's path, but a drive may need to be specified at the beginning of the path: 'c:/tmp/prospects.txt'. Notice that the SQL statement above has | as the field delimiter. If the delimiter was [TAB]—which is common—then one would replace | with '\t' here. A line-feed ('\n') isn't specified as the record delimiter since it's assumed. If the rows start and end with something else, though, then they will need to be stated. For instance, suppose the rows in the text file start with a double-quote and end with a double-quote and a Windows hard-return (i.e., a return and a line-feed). The statement would need to read like this:
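A hedged reconstruction of that variant, with the line delimiters described above:

```sql
LOAD DATA INFILE '/tmp/prospects.txt'
INTO TABLE prospect_contact
FIELDS TERMINATED BY '|'
LINES STARTING BY '"'
      TERMINATED BY '"\r\n';
```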
Notice that the starting double-quote is inside of single-quotes. If one needs to specify a single-quote as the start of a line, one could either put the one single-quote within double-quotes or one could escape the inner single-quote with a back-slash, thus telling MariaDB that the single-quote that follows is to be taken literally and is not part of the statement, per se:
If the table prospect_contact already contains some of the records that are about to be imported from prospects.txt (that is to say, records with the same primary key), then a decision should be made as to what MariaDB is to do about the duplicates. The SQL statement, as it stands above, will cause MariaDB to try to import the duplicate records and to create duplicate rows in prospect_contact for them. If the table's properties are set not to allow duplicates, then MariaDB will kick out errors. To get MariaDB to replace the duplicate existing rows with the ones being imported in, one would add the REPLACE just before the INTO TABLE clause like this:
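With the REPLACE keyword added, the statement would read along these lines:

```sql
LOAD DATA INFILE '/tmp/prospects.txt'
REPLACE INTO TABLE prospect_contact
FIELDS TERMINATED BY '|';
```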
To import only records for prospects that are not already in prospect_contact, one would substitute REPLACE with the IGNORE flag. This instructs MariaDB to ignore records read from the text file that already exist in the table.
For importing data into a table while it's in use, table access needs to be addressed. If access to the table by other users may not be interrupted, then a LOW_PRIORITY flag can be added to the LOAD DATA INFILE statement. This tells MariaDB that the loading of this data is a low priority. One would only need to change the first line of the SQL statement above to set its priority to low:
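A sketch of the statement with its priority lowered:

```sql
LOAD DATA LOW_PRIORITY INFILE '/tmp/prospects.txt'
INTO TABLE prospect_contact
FIELDS TERMINATED BY '|';
```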
If the LOW_PRIORITY flag isn't included, the table is locked temporarily during the import and other users are prevented from accessing it.
I mentioned earlier that uploading of the text file should not be done in binary mode so as to avoid the difficulties associated with Windows line endings. If this is unavoidable, however, there is an easy way to import binary row-endings with MariaDB. One would just specify the appropriate hexadecimals for a carriage-return combined with a line-feed (i.e., CRLF) as the value of TERMINATED BY:
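For instance, CRLF row endings can be matched with the hexadecimal value 0x0d0a:

```sql
LOAD DATA INFILE '/tmp/prospects.txt'
INTO TABLE prospect_contact
FIELDS TERMINATED BY '|'
LINES TERMINATED BY 0x0d0a;
```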
Notice that there are intentionally no quotes around the binary value. If there were, MariaDB would take the value for text and not a binary code. The semi-colon is not part of the value; it's the SQL statement terminator.
Earlier we also stated that the first row in the spreadsheet containing the column headings should be deleted before exporting to avoid the difficulty of importing the headings as a record. It's actually pretty easy to tell MariaDB to just skip the top line. One would add the following line to the very end of the LOAD DATA INFILE statement:
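That is, the statement would end with an IGNORE clause:

```sql
LOAD DATA INFILE '/tmp/prospects.txt'
INTO TABLE prospect_contact
FIELDS TERMINATED BY '|'
IGNORE 1 LINES;
```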
The number of lines for MariaDB to ignore can, of course, be more than one.
Another difficulty arises when some Windows application wizards export data with each field surrounded by double-quotes, as well as around the start and end of records. This can be a problem when a field contains a double-quote. To deal with this, some applications use a back-slash (\) to escape embedded double-quotes, to indicate that a particular double-quote is not a field ending but part of the field's content. However, some applications will use a different character (like a pound-sign) to escape embedded quotes. This can cause problems if MariaDB isn't prepared for the odd escape-character. MariaDB will think the escape character is actually text and the embedded quote-mark, although it's escaped, is a field ending. The unenclosed text that follows is imported into the next column, the remaining columns are one column off, and the last column is not imported. As maddening as this can be, it's quite manageable in MariaDB by adding an ENCLOSED BY and an ESCAPED BY clause:
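Using the pound-sign escape mentioned above as the example, the clauses would look like this:

```sql
LOAD DATA INFILE '/tmp/prospects.txt'
INTO TABLE prospect_contact
FIELDS TERMINATED BY '|'
       ENCLOSED BY '"'
       ESCAPED BY '#';
```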
In the Foreign Data Basics section above, we said that the columns in the spreadsheet should be put in the same order and quantity as the receiving table. This really isn't necessary if MariaDB is cued in as to what it should expect. To illustrate, let's assume that prospect_contact has four columns in the following order: row_id, name_first, name_last, telephone. Whereas, the spreadsheet has only three columns, differently named, in this order: Last Name, First Name, Telephone. If the spreadsheet isn't adjusted, then the SQL statement will need to be changed to tell MariaDB the field order:
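A sketch of such a statement, listing the table columns in the order the fields appear in the file (with the tab delimiter given as the hexadecimal 0x09):

```sql
LOAD DATA INFILE '/tmp/prospects.txt'
INTO TABLE prospect_contact
FIELDS TERMINATED BY 0x09
(name_last, name_first, telephone);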
This SQL statement tells MariaDB the name of each table column associated with each spreadsheet column in the order that they appear in the text file. From there it will naturally insert the data into the appropriate columns in the table. As for columns that are missing like row_id, MariaDB will fill in those fields with the default value if one has been supplied in the table's properties. If not, it will leave the field as NULL. Incidentally, we slipped in the binary [TAB] (0x09) as a field delimiter.
For some clients and for certain situations it may be of value to be able to import data into MariaDB without using the mariadb client. This could be necessary when constructing a shell script to import text files on an automated, regular schedule. To accomplish this, the mariadb-import (previously called mysqlimport) utility may be used, as it encompasses the LOAD DATA INFILE statement and can easily be run from a script. So if one wants to enter the involved SQL statement at the end of the last section above, the following could be entered from the command-line (i.e., not in the mariadb client):
Although this statement is written over several lines here, it either has to be on the same line when entered or a space followed by a back-slash has to be entered at the end of each line (as seen here) to indicate that more follows. Since the above is entered at the command-line prompt, the user isn't logged into MariaDB. Therefore the first line contains the user name and password for mariadb-import to give to MariaDB. The password itself is optional, but the directive --password (without the equal sign) isn't. If the password value is not given in the statement, then the user will be prompted for it. Notice that the order of directives doesn't matter after the initial command, except that the database and file name go last. Regarding the file name, its prefix must be the same as the table—the dot and the extension are ignored. This requires that prospects.txt be renamed to prospect_contact.txt. If the file isn't renamed, then MariaDB would create a new table called prospects and the --replace option would be pointless. After the file name, incidentally, one could list more text files, separated by a space, to process using mariadb-import. We've added the --verbose directive so as to be able to see what's going on. One probably would leave this out in an automated script. By the way, --low-priority and --ignore-lines are also available.
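A hedged sketch of the command the paragraph above describes (the user name and password are illustrative):

```shell
mariadb-import --user='marie' --password='secret' \
  --replace --verbose \
  --fields-terminated-by='\t' \
  sales_dept prospect_contact.txt
```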
Some web hosting companies do not allow the use of LOAD DATA INFILE or mariadb-import statements due to security vulnerabilities in these statements for them. To get around this, some extra steps are necessary to avoid having to manually enter the data one row at a time. First, one needs to have MariaDB installed on one's local workstation. For simplicity, we'll assume this is done and is running Linux on the main partition and MS Windows® on an extra partition. Recapping the on-going example of this article based on these new circumstances, one would boot up into Windows and start MS Excel®, load the client's spreadsheet into it and then run the export wizard as before—saving the file prospects.txt to the 'My Documents' directory. Then one would reboot into Linux and mount the Windows partition and copy the data text file to /tmp in Linux, locally. Next one would log into the local (not the client's) MariaDB server and import the text file using a LOAD DATA INFILE as we've extensively outline above. From there one would exit MariaDB and export the data out of MariaDB using the mariadb-dump utility locally, from the command-line like this:
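A sketch of that local export (credentials and output path are examples):

```shell
mariadb-dump --user='marie' --password='secret' \
  sales_dept prospect_contact > /tmp/prospects.sql
```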
This creates an interesting text file complete with all of the SQL commands necessary to insert the data back into MariaDB one record, one INSERT at a time. After running mariadb-dump, it's very educational to open the resulting file in a text editor to see what it generates.
After creating this table dump, one would upload the resulting file (in ASCII mode) to the /tmp directory on the client's web server. From the command prompt on the client's server one would enter the following:
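A sketch of that command, piping the uploaded dump into the mariadb client (credentials are illustrative):

```shell
mariadb --user='marie' --password='secret' sales_dept < /tmp/prospects.sql
```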
This line, along with the mariadb-dump line shown above, is a simple approach. Like the Windows application wizard, with mariadb-dump one can specify the format of the output file and several other factors. One important factor related to the scenario used in this article is the CREATE TABLE statement that is embedded in the mariadb-dump output file. This will fail and kick out an error because of the existing table prospect_contact in the client's database. To limit the output to only INSERT statements and no CREATE TABLE statements, the mariadb-dump line would look like this:
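A sketch of that variant, using the --no-create-info option (with the abbreviated user and password directives, so the password is prompted for):

```shell
mariadb-dump -u marie -p --no-create-info \
  sales_dept prospect_contact > prospects.sql
```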
Notice that we've used acceptable abbreviations for the user name and the password directives. Since the password was not given here, the user will be prompted for it.
The mariadb-dump utility usually works pretty well. However, one feature it's lacking at this time is a REPLACE flag as is found in the LOAD DATA INFILE statement and with the mariadb-import tool. So if a record already exists in prospect_contact, it won't be imported. Instead it will kick out an error message and stop at that record, which can be a mess if one has imported several hundred rows and has several hundred more to go. One easy fix for this is to open up prospects.sql in a text editor and do a search on the word INSERT and replace it with REPLACE. The syntax of both of these statements is the same, fortunately. So one would only need to replace the keyword for new records to be inserted and for existing records to be replaced.
It's always amazing to me how much can be involved in the simplest of statements in MariaDB. MariaDB is deceptively powerful and feature rich. One can keep the statements pretty minimal or one can develop a fairly detailed, single statement to allow for accuracy of action. There are many other aspects of importing data into MariaDB that we did not address—in particular dealing with utilities. We also didn't talk about the Perl modules that could be used to convert data files. These can be useful in scripting imports. There are many ways in which one can handle importing data. Hopefully, this article has presented most of the basics and pertinent advanced details that may be of use to most MariaDB developers.
This page is licensed: CC BY-SA / Gnu FDL
Connect to mysqld instance, find out important variables (datadir, InnoDB pagesize, encryption keys, encryption plugin etc)
Scan the database directory, datadir, looking for InnoDB tablespaces, load the tablespaces (basically, it is an “open” in InnoDB sense)
If --lock-ddl-per-table is used:
Take MDL locks for the InnoDB tablespaces that we want to copy. This is to ensure that there are no ALTER, RENAME, TRUNCATE or DROP TABLE on any of the tables that we want to copy.
This is implemented with:
If --lock-ddl-per-table is not used, then mariadb-backup would have to know all tables that were created or altered during the backup. See MDEV-16791.
Start a dedicated thread in mariadb-backup to copy InnoDB redo log (ib_logfile*).
This is needed to record all changes done while the backup is running. (The redo log logically is a single circular file, split into innodb_log_files_in_group files.)
The log is also used to detect whether any TRUNCATE or online ALTER TABLE operations were used.
The assumption is that the copy thread is able to keep up with the server. It should always be able to keep up if the redo log is big enough.
Copy all selected tablespaces, file by file, in dedicated threads in mariadb-backup without involving the mysqld server.
This is a special "careful" copy; it checks for page-level consistency by verifying the checksum.
The files are not point-in-time consistent as data may change during copy.
The idea is that InnoDB recovery would make it point-in-time consistent.
Execute FLUSH TABLE WITH READ LOCK. This is the default, but it may be omitted with the --no-lock parameter. The reason why FLUSH is needed is to ensure that all tables are in a consistent state at the exact same point in time, independent of storage engine.
If --lock-ddl-per-table is used and there is a user query waiting for MDL, the user query is killed to resolve a deadlock. Note that these are only queries of type ALTER, DROP, TRUNCATE or RENAME TABLE.
Copy .frm, MyISAM, Aria and other storage engine files.
If MyRocks is used, create a RocksDB checkpoint via the SET rocksdb_create_checkpoint=$rocksdb_data_dir/mariadb-backup_rocksdb_checkpoint command. The result is a directory with hardlinks to the MyRocks files. Copy the checkpoint directory to the backup (or create hardlinks if the backup directory is on the same partition as the data directory), then remove the checkpoint directory.
Copy tables that were created while the backup was running, and rename files that were changed during the backup.
Copy the rest of InnoDB redo log, stop redo-log-copy thread.
Write some metadata info (binlog position).
If FLUSH TABLE WITH READ LOCK was done:
execute: UNLOCK TABLES
If --lock-ddl-per-table was done:
execute COMMIT
If log tables exist:
Take MDL lock for log tables
Copy part of log tables that wasn't copied before
Unlock log tables
If FLUSH TABLES WITH READ LOCK is not used, only InnoDB tables are consistent (not the privilege tables in the mysql database or the binary log). The backup point depends on the content of the redo log within the backup itself.
This page is licensed: CC BY-SA / Gnu FDL
The socket file can be in a non-standard path. In this case, the socket option is probably written in the my.cnf file. Check that its value is identical in the [mysqld] and [client] sections; if not, the client will look for a socket in a wrong place.
If unsure where the Unix socket file is, it's possible to find this out, for example with netstat -ln | grep mysqld.
Usually, the MariaDB server does not by default accept connections from a remote client, or connections over TCP with a hostname, and has to be configured to permit these.
To solve this, see Configuring MariaDB for Remote Client Access
The unix_socket authentication plugin is enabled by default on Unix-like systems. This uses operating system credentials when connecting to MariaDB via the local Unix socket file. See unix_socket authentication plugin for instructions on connecting and on switching to password-based authentication as well as Authentication for an overview.
Authentication is granted to a particular username/host combination. 'user1'@'localhost', for example, is not the same as 'user1'@'166.78.144.191'. See the GRANT article for details on granting permissions.
Passwords are hashed with the PASSWORD() function. If you set a password with the SET PASSWORD statement, the PASSWORD() function must be used as well. For example, use SET PASSWORD FOR 'bob'@'%.loc.gov' = PASSWORD('newpass') rather than just SET PASSWORD FOR 'bob'@'%.loc.gov' = 'newpass'.
If you can run regular queries but get an authentication error when running the SELECT ... INTO OUTFILE, SELECT ... INTO DUMPFILE or LOAD DATA INFILE statements, you do not have permission to write files to the server. This requires the FILE privilege. See the GRANT article.
If you can connect to the server, but not to a database, for example:
or can connect to a particular database but not another, for example mariadb -u name -p db1 works but mariadb -u name -p db2 does not, then you have not been granted permission for the particular database. See the GRANT article.
It's possible that option files or environment variables may be providing incorrect connection parameters. Check the values provided in any option files read by the client you are using (see mariadbd Configuration Files and Groups and the documentation for the particular client you're using - see Clients and Utilities).
Option files can usually be suppressed with the --no-defaults option, for example:
If you are unable to connect to the server, for example because you have lost the root password, you can start the server with the --skip-grant-tables option, which bypasses the privilege tables and gives all users full access to all tables. You can then run FLUSH PRIVILEGES to resume using the grant tables, followed by SET PASSWORD to change the password for an account.
You may have created a user with something like CREATE USER melisa IDENTIFIED BY 'password';
This creates a user with the '%' wildcard host.
However, you may still be failing to login from localhost. Some setups create anonymous users, including localhost. So, the following records exist in the user table:
Since you are connecting from localhost, the anonymous credentials, rather than those for the 'melisa' user, are used. The solution is either to add a new user specific to localhost, or to remove the anonymous localhost user.
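A sketch of the second solution, removing the anonymous localhost account (run this as a privileged user, after confirming the account exists in mysql.user):

```sql
-- Remove the anonymous user that shadows 'melisa' for localhost connections
DROP USER ''@'localhost';

-- Reload the grant tables so the change takes effect for new connections
FLUSH PRIVILEGES;
```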
CC BY-SA / Gnu FDL
Next, the function body is placed between BEGIN and END statements. It declares a tinyint, X, which is simply set to 42, and this is the result returned.
Of course, a function that doesn't take any arguments is of little use. Here's a more complex example:
This function takes an argument, price which is defined as a DECIMAL, and returns an INT.
Take a look at the CREATE FUNCTION page for more details.
It is also possible to create stored aggregate functions.
To find which stored functions are running on the server, use SHOW FUNCTION STATUS:
Alternatively, query the routines table in the INFORMATION_SCHEMA database directly:
To find out what the stored function does, use SHOW CREATE FUNCTION:
To drop a stored function, use the DROP FUNCTION statement.
To change the characteristics of a stored function, use ALTER FUNCTION. Note that you cannot change the parameters or body of a stored function using this statement; to make such changes, you must drop and re-create the function using DROP FUNCTION and CREATE FUNCTION.
See the article Stored Routine Privileges.
This page is licensed: CC BY-SA / Gnu FDL
DROP DATABASE is used to completely destroy an existing database.
USE is used to select a default database.
CREATE TABLE is used to create a new table, which is where your data is actually stored.
ALTER TABLE is used to modify an existing table's definition.
DROP TABLE is used to completely destroy an existing table.
DESCRIBE shows the structure of a table.
SELECT is used when you want to read (or select) your data.
INSERT is used when you want to add (or insert) new data.
UPDATE is used when you want to change (or update) existing data.
DELETE is used when you want to remove (or delete) existing data.
REPLACE is used when you want to add or change (or replace) new or existing data.
TRUNCATE TABLE is used when you want to empty (or delete) all data from the table.
START TRANSACTION is used to begin a transaction.
COMMIT is used to apply the changes and end the transaction.
ROLLBACK is used to discard the changes and end the transaction.
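The three transaction statements can be combined like this (a sketch reusing the mytable example from this primer):

```sql
START TRANSACTION;
UPDATE mytable SET name = 'William' WHERE id = 2;
DELETE FROM mytable WHERE id = 3;
ROLLBACK;   -- both changes are discarded; COMMIT here would have applied them
```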
The first version of this article was copied, with permission, from Basic_SQL_Statements on 2012-10-05.
This page is licensed: CC BY-SA / Gnu FDL
The first step is to simply take and prepare a fresh full backup of a database server in the replication topology. If the source database server is the desired replication primary, then we do not need to add any additional options when taking the full backup. For example:
If the source database server is a replica of the desired primary, then we should add the --slave-info option, and possibly the --safe-slave-backup option. For example:
And then we would prepare the backup as you normally would. For example:
Once the backup is done and prepared, we can copy it to the new replica. For example:
At this point, we can restore the backup to the datadir, as you normally would. For example:
And adjusting file permissions, if necessary:
Before the new replica can begin replicating from the primary, we need to create a user account on the primary that the replica can use to connect, and we need to grant the user account the REPLICATION SLAVE privilege. For example:
Before we start the server on the new replica, we need to configure it. At the very least, we need to ensure that it has a unique server_id value. We also need to make sure other replication settings are what we want them to be, such as the various GTID system variables, if those apply in the specific environment.
Once configuration is done, we can start the MariaDB Server process on the new replica.
At this point, we need to get the replication coordinates of the primary from the original backup directory.
If we took the backup on the primary, then the coordinates are in the xtrabackup_binlog_info file. If we took the backup on another replica and we provided the --slave-info option, then the coordinates are in the xtrabackup_slave_info file.
mariadb-backup dumps replication coordinates in two forms: GTID coordinates and binary log file and position coordinates, like the ones you would normally see from SHOW MASTER STATUS output. We can choose which set of coordinates we would like to use to set up replication.
For example:
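The coordinates file is plain text; for illustration, its content might look like this (values hypothetical, in the same format used by the examples that follow):

```
$ cat xtrabackup_binlog_info
mariadb-bin.000096 568 0-1-2
```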
Regardless of the coordinates we use, we will have to set up the primary connection using CHANGE MASTER TO and then start the replication threads with START SLAVE.
If we want to use GTIDs, then we will have to first set gtid_slave_pos to the GTID coordinates that we pulled from either the xtrabackup_binlog_info file or the xtrabackup_slave_info file in the backup directory. For example:
And then we would set MASTER_USE_GTID=slave_pos in the CHANGE MASTER TO statement. For example:
If we want to use the binary log file and position coordinates, then we would set MASTER_LOG_FILE and MASTER_LOG_POS in the CHANGE MASTER TO statement to the file and position coordinates that we pulled; either the xtrabackup_binlog_info file or the xtrabackup_slave_info file in the backup directory, depending on whether the backup was taken from the primary or from a replica of the primary. For example:
We should be done setting up the replica now, so we should check its status with SHOW SLAVE STATUS. For example:
This page is licensed: CC BY-SA / Gnu FDL
Although the plugin's shared library is distributed with MariaDB by default, the plugin is not actually installed by MariaDB by default. There are two methods that can be used to install the plugin with MariaDB.
The first method can be used to install the plugin without restarting the server. You can install the plugin dynamically by executing INSTALL SONAME or INSTALL PLUGIN:
The second method can be used to tell the server to load the plugin when it starts up. The plugin can be installed this way by providing the --plugin-load or the --plugin-load-add options. This can be specified as a command-line argument to mysqld or it can be specified in a relevant server option group in an option file:
You can uninstall the plugin dynamically by executing UNINSTALL SONAME or UNINSTALL PLUGIN:
If you installed the plugin by providing the --plugin-load or the --plugin-load-add options in a relevant server option group in an option file, then those options should be removed to prevent the plugin from being loaded the next time the server is restarted.
Data is compressed with zlib as it is inserted, making it very small.
Data is slow to select, as it needs to be uncompressed, and, besides the query cache, there is no cache.
Supports AUTO_INCREMENT (since MariaDB/MySQL 5.1.6), which can be a unique or a non-unique index.
Since MariaDB/MySQL 5.1.6, selects scan past BLOB columns unless they are specifically requested, making these queries much more efficient.
Does not support data types.
Does not support .
Does not support foreign keys.
Does not support .
No storage limit.
Supports row locking.
Supports table discovery, and the server can access ARCHIVE tables even if the corresponding .frm file is missing.
OPTIMIZE TABLE and REPAIR TABLE can be used to compress the table in its entirety, resulting in slightly better compression.
With MariaDB, it is possible to upgrade from the MySQL 5.0 format without having to dump the tables.
is supported.
Running many SELECTs during insertion can degrade the compression, unless only multi-row INSERTs and INSERT DELAYED are used.
Oracle PL/SQL compatibility using SQL/PL is provided
Stored aggregate functions were a project by Varun Gupta.
First a simplified example:
A non-trivial example that cannot easily be rewritten using existing functions:
This uses the same marks table as created above.
This page is licensed: CC BY-SA / Gnu FDL
The BLACKHOLE storage engine discards all data written to it but records operations in the binary log, useful for replication filtering and testing.
The BLACKHOLE storage engine accepts data but does not store it and always returns an empty result.
A table using the BLACKHOLE storage engine consists of a single .frm table format file, but no associated data or index files.
This storage engine can be useful, for example, if you want to run complex filtering rules on a slave without incurring any overhead on a master. The master can run a BLACKHOLE storage engine, with the data replicated to the slave for processing.
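Creating a BLACKHOLE table looks like any other CREATE TABLE (a sketch; the table name and columns here are hypothetical):

```sql
CREATE TABLE log_sink (
  id INT NOT NULL,
  message VARCHAR(255)
) ENGINE=BLACKHOLE;

-- Accepted (and written to the binary log if logging is enabled),
-- but nothing is actually stored:
INSERT INTO log_sink VALUES (1, 'hello');

SELECT * FROM log_sink;  -- always returns an empty result
```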
Although the plugin's shared library is distributed with MariaDB by default, the plugin is not actually installed by MariaDB by default. There are two methods that can be used to install the plugin with MariaDB.
The first method can be used to install the plugin without restarting the server. You can install the plugin dynamically by executing INSTALL SONAME or INSTALL PLUGIN:
The second method can be used to tell the server to load the plugin when it starts up. The plugin can be installed this way by providing the --plugin-load or the --plugin-load-add options. This can be specified as a command-line argument to mysqld or it can be specified in a relevant server option group in an option file:
You can uninstall the plugin dynamically by executing UNINSTALL SONAME or UNINSTALL PLUGIN:
If you installed the plugin by providing the --plugin-load or the --plugin-load-add options in a relevant server option group in an option file, then those options should be removed to prevent the plugin from being loaded the next time the server is restarted.
INSERT, UPDATE and DELETE statements all work with the BLACKHOLE storage engine. However, no data changes are actually applied.
If the binary log is enabled, all SQL statements are logged as usual and replicated to any slave servers. However, since rows are not stored, it is important to use the statement-based rather than the row-based or mixed binary log format, as UPDATE and DELETE statements are neither logged nor replicated in row-based formats.
Some triggers work with the BLACKHOLE storage engine.
BEFORE triggers for INSERT statements are still activated.
Triggers for UPDATE and DELETE statements are not activated.
Triggers with the FOR EACH ROW clause do not apply, since the tables have no rows.
Foreign keys are not supported. If you convert an InnoDB table to BLACKHOLE, then the foreign keys will disappear. If you convert the same table back to InnoDB, then you will have to recreate them.
If you convert an table which contains to BLACKHOLE, then it produces an error.
Because a BLACKHOLE table does not store data, it will not maintain the AUTO_INCREMENT value. If you are replicating to a table that can handle AUTO_INCREMENT columns, and are not explicitly setting the primary key auto-increment value in the query, or using the SET INSERT_ID statement, inserts will fail on the slave due to duplicate keys.
The maximum key size is:
3500 bytes (>= , , , and )
1000 bytes (<= , , , and ).
This page is licensed: CC BY-SA / Gnu FDL
The DBMS_OUTPUT plugin provides Oracle-compatible output buffering functions (like PUT_LINE), allowing stored procedures to send messages to the client.
Oracle documentation describing DBMS_OUTPUT can be found here:
The main idea of DBMS_OUTPUT is:
Messages submitted by DBMS_OUTPUT.PUT_LINE() are not sent to the client until the sending subprogram (or trigger) completes. There is no way to flush output during the execution of a procedure.
Therefore, lines are collected into a server side buffer, which, at the end of the current user statement, can be fetched to the client side using another SQL statement. Then, they can be read using a regular MariaDB Connector-C API. No changes in the client-protocol are needed.
Oracle's SQL*Plus uses the procedure DBMS_OUTPUT.GET_LINES() to fetch the output to the client side as an array of strings.
MariaDB implements all routines supported by Oracle, except GET_LINES():
Procedure ENABLE() - enable the routines.
Procedure DISABLE() - disable the routines. If the package is disabled, all calls to subprograms, such as PUT() and PUT_LINE(), are ignored (or exit immediately without doing anything).
Procedure PUT_LINE()
The package starts in disabled mode, so an explicit enabling is needed:
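A sketch of what such enabling could look like, assuming the sys.DBMS_OUTPUT package layout described in this article (exact call syntax may differ by version):

```sql
SET sql_mode=ORACLE;

-- The package starts disabled, so enable it before writing any output
CALL sys.DBMS_OUTPUT.ENABLE();
CALL sys.DBMS_OUTPUT.PUT_LINE('Hello from DBMS_OUTPUT');
```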
If a call to GET_LINE or GET_LINES did not retrieve all lines, then a subsequent call to PUT, PUT_LINE, or NEW_LINE discards the remaining lines (to avoid confusing them with the next message). This script demonstrates the principle:
Oracle uses this data type as a storage for the buffer:
Like Oracle, MariaDB uses an associative array as a storage for the buffer.
This functionality is not implemented.
In Oracle, the function GET_LINES() returns an array of strings of this data type:
MariaDB does not have array data types in the C and C++ connectors, so they can't take advantage of GET_LINES() in a client program.
Fetching all lines in a PL/SQL program is implemented using a loop of sys.DBMS_OUTPUT.GET_LINE() calls:
Fetching all lines on the client side (for instance, in a C program using Connector/C) is done by using a loop of sys.DBMS_OUTPUT.GET_LINE() queries.
Oracle has the following limits:
The maximum individual line length (sent to DBMS_OUTPUT) is 32767 bytes.
The default buffer size is 20000 bytes. The minimum size is 2000 bytes. The maximum is unlimited.
MariaDB also implements some limits, either using the total size of all rows or using the row count.
Like other bootstrap scripts, the script creating DBMS_OUTPUT:
Is put into a new separate /scripts/dbms_output.sql file in the source directory;
Is installed into /share/dbms_output.sql of the installation directory.
The CONNECT storage engine has been deprecated.
This storage engine has been deprecated.
CONNECT supports MariaDB virtual and persistent columns. It is also possible to declare a column as being a CONNECT special column. Let us see with an example how this can be done. The boys table we have seen previously can be recreated as:
We have defined two CONNECT special columns. You can give them any name; it is the field SPECIAL option that specifies the special column functional name.
Note: the default values specified for the special columns do not mean anything. They are specified just to prevent getting warning messages when inserting new rows.
For the definition of the agehired virtual column, no CONNECT options can be specified as it has no offset or length, not being stored in the file.
The command:
will return:
Existing special columns are listed in the following table:
Note: CONNECT does not currently support auto-incremented columns. However, a ROWID special column will do the job of a column auto-incremented by 1.
This page is licensed: GPLv2
This page is licensed: CC BY-SA / Gnu FDL
Explore MariaDB's built-in string functions for formatting, extracting, and manipulating text data within your queries.
MariaDB has many built-in functions that can be used to manipulate strings of data. With these functions, one can format data, extract certain characters, or use search expressions. Good developers should be aware of the string functions that are available. Therefore, in this article we will go through several string functions, grouping them by similar features, and provide examples of how they might be used.
There are several string functions that are used to format text and numbers for nicer display. A popular and very useful function for pasting together the contents of data fields with text is the CONCAT() function. As an example, suppose that a table called contacts has a column for each sales contact's first name and another for the last name. The following SQL statement would put them together:
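A minimal sketch of such a query (the column names name_first and name_last are assumptions, not given in the text):

```sql
SELECT CONCAT(name_first, ' ', name_last) AS Name
FROM contacts;
```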
This statement will display the first name, a space, and then the last name together in one column. The AS clause will change the column heading of the results to Name.
Learn the various parameters and options for connecting to a MariaDB server using the command-line client and other tools.
This article covers connecting to MariaDB and the basic connection parameters. If you are completely new to MariaDB, take a look at first.
In order to connect to the MariaDB server, the client software must provide the correct connection parameters. The client software will most often be the mariadb command-line client, used for entering statements from the command line, but the same concepts apply to any client, such as a graphical client, a client used to run backups such as mariadb-backup, etc. The rest of this article assumes that the mariadb command-line client is used.
If a connection parameter is not provided, it will revert to a default value.
For example, to connect to MariaDB using only default values with the mariadb client, enter the following from the command line:
In this case, the following defaults apply:
The host name is localhost.
Configure MariaDB to accept remote connections by adjusting the bind-address directive and granting appropriate user privileges.
Some MariaDB packages bind MariaDB to 127.0.0.1 (the loopback IP address) by default as a security measure, using the bind-address configuration directive. Old MySQL packages sometimes disabled TCP/IP networking altogether using the skip-networking directive. Before going into how to configure these, let's explain what each of them actually does:
skip-networking is fairly simple. It just tells MariaDB to run without any of the TCP/IP networking options.
bind-address requires a little bit of background information. A given server usually has at least two networking interfaces (although this is not required) and can easily have more. The two most common are a loopback network device and a physical Network Interface Card (NIC) which allows you to communicate with the network. MariaDB is bound to the loopback interface by default because this makes it impossible to connect to the TCP port on the server from a remote host (the bind-address must refer to a local IP address, or you will receive a fatal error and MariaDB will not start). This is of course not desirable if you want to use the TCP port from a remote host, so you must remove this bind-address directive, or replace it with an address that is reachable from the remote host (for example 0.0.0.0 to listen on all IPv4 interfaces).
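As a sketch, the option-file change might look like this (0.0.0.0 is one choice; the server's own public IP also works):

```ini
[mysqld]
# bind-address = 127.0.0.1   ; loopback-only default: remove or replace this line
bind-address = 0.0.0.0       ; listen on all IPv4 interfaces
```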
This guide explains various methods for copying tables between MariaDB databases and servers, including using FLUSH TABLES FOR EXPORT and mysqldump.
With MariaDB it's very easy to copy tables between different MariaDB databases and different MariaDB servers. This works for tables created with the , , , , , , and engines.
The normal procedure to copy a table is:
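A sketch of the flush-and-copy procedure, using the FLUSH TABLES ... FOR EXPORT approach mentioned in the introduction (the table name foo is hypothetical):

```sql
-- On the source server: close the table and block writes so its files are stable
FLUSH TABLES foo FOR EXPORT;

-- At this point, copy the table's files (e.g. foo.frm, foo.MAI, foo.MAD
-- for an Aria table) from the datadir to the target server's datadir.

UNLOCK TABLES;  -- release the export lock once the copy is finished
```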
The table files can be found in the datadir under databasename (you can execute SELECT @@datadir to find the correct directory).
When copying the files, you should copy all files with the same table_name plus their various extensions. For example, for an Aria table named foo, you will have the files foo.frm, foo.MAI, foo.MAD, and possibly foo.TRG if you have triggers.
If one wants to distribute a table to a user that doesn't need write access to the table, and one wants to minimize the storage size of the table, the recommended engine to use is Aria or MyISAM, as one can pack the table with aria_pack or myisampack respectively to make it notably smaller. MyISAM is the most portable format, as it does not depend on server settings. Aria and InnoDB require the same block size on both servers.
An introduction to MariaDB's pluggable storage engine architecture, highlighting key engines like InnoDB, MyISAM, and Aria for different workloads.
Understand backup locking stages. This page explains how mariadb-backup uses BACKUP STAGE commands to minimize locking during operation.
The BACKUP STAGE commands are a set of commands that make it possible to implement an efficient external backup tool. How mariadb-backup uses these commands depends on whether you are using the version that is bundled with MariaDB Community Server or the version that is bundled with MariaDB Enterprise Server.
BACKUP STAGE Commands in MariaDB Community Server
The BACKUP STAGE commands are supported. However, the version of mariadb-backup that is bundled with MariaDB Community Server does not yet use the BACKUP STAGE commands.
Discover techniques to optimize your backup processes, including multithreading, incremental backups, and leveraging storage snapshots.
The CONNECT storage engine has been deprecated.
This storage engine has been deprecated.
The CONNECT storage engine enables MariaDB to access external local or remote data (MED). This is done by defining tables based on different data types, in particular files in various formats, data extracted from other DBMS or products (such as Excel or MongoDB) via ODBC or JDBC, or data retrieved from the environment (for example DIR, WMI, and MAC tables)
This storage engine supports table partitioning, MariaDB virtual columns and permits defining special columns such as ROWID, FILEID, and SERVID.
The storage engine must be installed before it can be used.
The RANGE partitioning type assigns rows to partitions based on whether column values fall within contiguous, non-overlapping ranges.
The RANGE partitioning type is used to assign each partition a range of values generated by the partitioning expression. Ranges must be ordered, contiguous and non-overlapping. The minimum value is always included in the first range. The highest value may or may not be included in the last range.
A variant of this partitioning method, , allows us to use multiple columns and more datatypes.
The last part of a CREATE TABLE statement can be the definition of the new table's partitions. In the case of RANGE partitioning, each partition is assigned its range of values with a VALUES LESS THAN clause.
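A minimal sketch of RANGE partitioning (the table, column, and boundary values are hypothetical):

```sql
CREATE TABLE orders (
  id INT NOT NULL,
  created YEAR NOT NULL
)
PARTITION BY RANGE (created) (
  PARTITION p0 VALUES LESS THAN (2020),
  PARTITION p1 VALUES LESS THAN (2023),
  PARTITION p2 VALUES LESS THAN MAXVALUE  -- catch-all for the highest values
);
```

The ranges must be listed in ascending order; a row whose partitioning value is not below any boundary is rejected unless a MAXVALUE partition exists.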
The CONNECT storage engine has been deprecated.
This storage engine has been deprecated.
Directly using external (file) data has many advantages, such as to work on “fresh” data produced for instance by cash registers, telephone switches, or scientific apparatus. However, you may want in some case to import external data into your MariaDB database. This is extremely simple and flexible using the CONNECT handler. For instance, let us suppose you want to import the data of the xsample.xml XML file previously given in example into a table called biblio belonging to the connect database. All you have to do is to create it by:
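The statement itself is not shown in this excerpt; it would be roughly of this form (a sketch, assuming the source CONNECT table is named xsampall2 as in the text; the real example may specify an engine and other options):

```sql
CREATE TABLE biblio AS
SELECT * FROM xsampall2;
```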
This last statement creates the table and inserts the original XML data, translated to tabular format by the xsampall2 CONNECT table, into the MariaDB biblio table. Note that further transformation on the data could have been achieved by using a more elaborate Select statement in the Create statement, for instance using filters, alias or applying functions to the data. However, because the Create Table process copies table data, later modifications of the
A brief history of the naming of the Aria storage engine, explaining its origins as "Maria" and the reasons for the eventual name change.
The storage engine used to be called Maria. This page gives the history and background of how and why this name was changed to Aria.
When starting what became the MariaDB project, Monty and the initial developers only planned to work on a next generation storage engine replacement. This storage engine would be crash safe and eventually support transactions. Monty named the storage engine, and the project, after his daughter, Maria.
Work began in earnest on the Maria storage engine but the plans quickly expanded and morphed and soon the developers were not just working on a storage engine, but on a complete branch of the MySQL database. Since the project was already called Maria, it made sense to call the whole database server MariaDB.
The CONNECT storage engine has been deprecated.
This storage engine has been deprecated.
Note: You can download a (1.7.0003).
This page is licensed: CC BY-SA / Gnu FDL
LOAD DATA INFILE '/tmp/prospects.txt'
INTO TABLE prospect_contact
FIELDS TERMINATED BY '|';

LOAD DATA INFILE '/tmp/prospects.txt'
INTO TABLE prospect_contact
FIELDS TERMINATED BY '|'
LINES STARTING BY '"'
TERMINATED BY '"\r\n';

...
LINES STARTING BY '\''
...

LOAD DATA INFILE '/tmp/prospects.txt'
REPLACE INTO TABLE prospect_contact
FIELDS TERMINATED BY '|'
LINES STARTING BY '"'
TERMINATED BY '"\n';

LOAD DATA LOW_PRIORITY INFILE '/tmp/prospects.txt'
...

...
TERMINATED BY 0x0d0a;

...
IGNORE 1 LINES;

LOAD DATA LOW_PRIORITY INFILE '/tmp/prospects.txt'
REPLACE INTO TABLE prospect_contact
FIELDS TERMINATED BY '"'
ENCLOSED BY '"' ESCAPED BY '#'
LINES STARTING BY '"'
TERMINATED BY '"\n'
IGNORE 1 LINES;

LOAD DATA LOW_PRIORITY INFILE '/tmp/prospects.txt'
REPLACE INTO TABLE sales_dept.prospect_contact
FIELDS TERMINATED BY 0x09
ENCLOSED BY '"' ESCAPED BY '#'
TERMINATED BY 0x0d0a
IGNORE 1 LINES
(name_last, name_first, telephone);

mariadb-import --user='marie_dyer' --password='angelle1207' \
--fields-terminated-by=0x09 --lines-terminated-by=0x0d0a \
--replace --low-priority --fields-enclosed-by='"' \
--fields-escaped-by='#' --ignore-lines='1' --verbose \
--columns='name_last, name_first, telephone' \
sales_dept '/tmp/prospect_contact.txt'

mariadb-dump --user='root' --password='geronimo' sales_dept prospect_contact > /tmp/prospects.sql

mariadb --user='marie_dyer' --password='angelle12107' sales_dept < '/tmp/prospects.sql'

mariadb-dump -u marie_dyer -p --no-create-info sales_dept prospect_contact > /tmp/prospects.sql

BEGIN
FOR EACH affected TABLE
SELECT 1 FROM <TABLE> LIMIT 0

mariadb -uname -p -uname -p
ERROR 2002 (HY000): Can't connect to local MySQL server through
socket '/var/run/mysqld/mysqld.sock' (2 "No such file or directory")

mariadb -uname -p --port=3307 --protocol=tcp
ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost'
(111 "Connection refused")netstat -ln | grep mysqld
unix 2 [ ACC ] STREAM LISTENING 33209505 /var/run/mysqld/mysqld.sock

(/my/maria-10.4) ./client/mysql --host=myhost --protocol=tcp --port=3306 test
ERROR 2002 (HY000): Can't connect to MySQL server on 'myhost' (115)
(/my/maria-10.4) telnet myhost 3306
Trying 192.168.0.11...
telnet: connect to address 192.168.0.11: Connection refused
(/my/maria-10.4) perror 115
OS error code 115: Operation now in progress

USE test;
ERROR 1044 (42000): Access denied for user 'ian'@'localhost' to database 'test'

mariadb-import --no-defaults ...

CREATE USER melisa IDENTIFIED BY 'password';

SELECT user,host FROM mysql.user WHERE user='melisa';
+--------+------+
| user | host |
+--------+------+
| melisa | % |
+--------+------+

SELECT user,host FROM mysql.user WHERE user='melisa' OR user='';
+--------+-----------+
| user | host |
+--------+-----------+
| melisa | % |
| | localhost |
+--------+-----------+

DELIMITER //
CREATE FUNCTION FortyTwo() RETURNS TINYINT DETERMINISTIC
BEGIN
DECLARE x TINYINT;
SET x = 42;
RETURN x;
END
//
DELIMITER ;

SELECT FortyTwo();
+------------+
| FortyTwo() |
+------------+
| 42 |
+------------+

DELIMITER //
CREATE FUNCTION VatCents(price DECIMAL(10,2)) RETURNS INT DETERMINISTIC
BEGIN
DECLARE x INT;
SET x = price * 114;
RETURN x;
END //
Query OK, 0 rows affected (0.04 sec)
DELIMITER ;

SHOW FUNCTION STATUS\G
*************************** 1. row ***************************
Db: test
Name: VatCents
Type: FUNCTION
Definer: root@localhost
Modified: 2013-06-01 12:40:31
Created: 2013-06-01 12:40:31
Security_type: DEFINER
Comment:
character_set_client: utf8
collation_connection: utf8_general_ci
Database Collation: latin1_swedish_ci
1 row in set (0.00 sec)

SELECT ROUTINE_NAME FROM INFORMATION_SCHEMA.ROUTINES WHERE
ROUTINE_TYPE='FUNCTION';
+--------------+
| ROUTINE_NAME |
+--------------+
| VatCents |
+--------------+

SHOW CREATE FUNCTION VatCents\G
*************************** 1. row ***************************
Function: VatCents
sql_mode:
Create Function: CREATE DEFINER=`root`@`localhost` FUNCTION `VatCents`(price DECIMAL(10,2)) RETURNS int(11)
DETERMINISTIC
BEGIN
DECLARE x INT;
SET x = price * 114;
RETURN x;
END
character_set_client: utf8
collation_connection: utf8_general_ci
Database Collation: latin1_swedish_ci

DROP FUNCTION FortyTwo;

CREATE DATABASE mydb;
USE mydb;
CREATE TABLE mytable ( id INT PRIMARY KEY, name VARCHAR(20) );
INSERT INTO mytable VALUES ( 1, 'Will' );
INSERT INTO mytable VALUES ( 2, 'Marry' );
INSERT INTO mytable VALUES ( 3, 'Dean' );
SELECT id, name FROM mytable WHERE id = 1;
UPDATE mytable SET name = 'Willy' WHERE id = 1;
SELECT id, name FROM mytable;
DELETE FROM mytable WHERE id = 1;
SELECT id, name FROM mytable;
DROP DATABASE mydb;
SELECT COUNT(1) FROM mytable; -- gives the number of records in the table

$ mariadb-backup --backup \
--target-dir=/var/mariadb/backup/ \
--user=mariadb-backup --password=mypassword$ mariadb-backup --backup \
--slave-info --safe-slave-backup \
--target-dir=/var/mariadb/backup/ \
--user=mariadb-backup --password=mypassword
$ mariadb-backup --prepare \
--target-dir=/var/mariadb/backup/
$ rsync -avP /var/mariadb/backup dbserver2:/var/mariadb/backup
$ mariadb-backup --copy-back \
--target-dir=/var/mariadb/backup/
$ chown -R mysql:mysql /var/lib/mysql/
CREATE USER 'repl'@'dbserver2' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'dbserver2';
$ cat xtrabackup_binlog_info
mariadb-bin.000096 568 0-1-2
SET GLOBAL gtid_slave_pos = "0-1-2";
CHANGE MASTER TO
MASTER_HOST="dbserver1",
MASTER_PORT=3306,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_USE_GTID=slave_pos;
START SLAVE;
CHANGE MASTER TO
MASTER_HOST="dbserver1",
MASTER_PORT=3306,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_LOG_FILE='mariadb-bin.000096',
MASTER_LOG_POS=568;
START SLAVE;
SHOW SLAVE STATUS\G
INSTALL SONAME 'ha_archive';
[mariadb]
...
plugin_load_add = ha_archive
UNINSTALL SONAME 'ha_archive';
CREATE AGGREGATE FUNCTION function_name (parameters) RETURNS return_type
BEGIN
  -- all types of declarations
  DECLARE CONTINUE HANDLER FOR NOT FOUND RETURN return_val;
  LOOP
    FETCH GROUP NEXT ROW; -- fetches the next row from the table
    -- other instructions
  END LOOP;
END
SET sql_mode=Oracle;
DELIMITER //
CREATE AGGREGATE FUNCTION function_name (parameters) RETURN return_type
declarations
BEGIN
LOOP
FETCH GROUP NEXT ROW; -- fetches next row from table
-- other instructions
END LOOP;
EXCEPTION
WHEN NO_DATA_FOUND THEN
RETURN return_val;
END //
DELIMITER ;
CREATE TABLE marks(stud_id INT, grade_count INT);
INSERT INTO marks VALUES (1,6), (2,4), (3,7), (4,5), (5,8);
SELECT * FROM marks;
+---------+-------------+
| stud_id | grade_count |
+---------+-------------+
| 1 | 6 |
| 2 | 4 |
| 3 | 7 |
| 4 | 5 |
| 5 | 8 |
+---------+-------------+
DELIMITER //
CREATE AGGREGATE FUNCTION IF NOT EXISTS aggregate_count(x INT) RETURNS INT
BEGIN
DECLARE count_students INT DEFAULT 0;
DECLARE CONTINUE HANDLER FOR NOT FOUND
RETURN count_students;
LOOP
FETCH GROUP NEXT ROW;
IF x THEN
SET count_students = count_students+1;
END IF;
END LOOP;
END //
DELIMITER ;
DELIMITER //
CREATE AGGREGATE FUNCTION medi_int(x INT) RETURNS DOUBLE
BEGIN
DECLARE CONTINUE HANDLER FOR NOT FOUND
BEGIN
DECLARE res DOUBLE;
DECLARE cnt INT DEFAULT (SELECT COUNT(*) FROM tt);
DECLARE lim INT DEFAULT (cnt-1) DIV 2;
IF cnt % 2 = 0 THEN
SET res = (SELECT AVG(a) FROM (SELECT a FROM tt ORDER BY a LIMIT lim,2) ttt);
ELSE
SET res = (SELECT a FROM tt ORDER BY a LIMIT lim,1);
END IF;
DROP TEMPORARY TABLE tt;
RETURN res;
END;
CREATE TEMPORARY TABLE tt (a INT);
LOOP
FETCH GROUP NEXT ROW;
INSERT INTO tt VALUES (x);
END LOOP;
END //
DELIMITER ;
SET sql_mode=Oracle;
DELIMITER //
CREATE AGGREGATE FUNCTION aggregate_count(x INT) RETURN INT AS count_students INT DEFAULT 0;
BEGIN
LOOP
FETCH GROUP NEXT ROW;
IF x THEN
SET count_students := count_students+1;
END IF;
END LOOP;
EXCEPTION
WHEN NO_DATA_FOUND THEN
RETURN count_students;
END aggregate_count //
DELIMITER ;
SELECT aggregate_count(stud_id) FROM marks;
SELECT u.id, u.name, alliance.ally FROM users u JOIN alliance ON
(u.id=alliance.userId) JOIN team ON (alliance.teamId=team.teamId)
WHERE team.teamName='Legionnaires' AND u.online=1 AND ((u.subscription='paid'
AND u.paymentStatus='current') OR u.subscription='free') ORDER BY u.name;
SELECT
u.id
, u.name
, alliance.ally
FROM
users u
JOIN alliance ON (u.id = alliance.userId)
JOIN team ON (alliance.teamId = team.teamId)
WHERE
team.teamName = 'Legionnaires'
AND u.online = 1
AND (
(u.subscription = 'paid' AND u.paymentStatus = 'current')
OR
u.subscription = 'free'
)
ORDER BY
u.name;
SELECT *
FROM
financial_report_Q_1 AS a
JOIN sales_renderings AS b ON (a.salesGroup = b.groupId)
JOIN sales_agents AS c ON (b.groupId = c.group)
WHERE
b.totalSales > 10000
AND c.id != a.clientId
SELECT *
FROM
financial_report_Q_1 AS frq1
JOIN sales_renderings AS sr ON (frq1.salesGroup = sr.groupId)
JOIN sales_agents AS sa ON (sr.groupId = sa.group)
WHERE
sr.totalSales > 10000
AND sa.id != frq1.clientId
SELECT *
FROM
family,
relationships
WHERE
family.personId = relationships.personId
AND relationships.relation = 'father'
SELECT *
FROM
family
JOIN relationships ON (family.personId = relationships.personId)
WHERE
relationships.relation = 'father'
ERROR 1064: You have an error in your SQL syntax; check the manual that corresponds to your
MariaDB server version for the right syntax to use near ' ' at line 1
SELECT * FROM someTable WHERE field = 'value
SELECT * FROM someTable WHERE field = 1 GROUP BY id,
SELECT * FROM actionTable WHERE `DELETE` = 1;
SELECT * FROM a, b JOIN c ON a.x = c.x;
SELECT * FROM someTable WHERE someId IN (SELECT id FROM someLookupTable);
SELECT * FROM tableA JOIN tableB ON tableA.x = tableB.y;
CREATE TABLE boys (
linenum INT(6) NOT NULL DEFAULT 0 special=ROWID,
name CHAR(12) NOT NULL,
city CHAR(12) NOT NULL,
birth DATE NOT NULL date_format='DD/MM/YYYY',
hired DATE NOT NULL date_format='DD/MM/YYYY' flag=36,
agehired INT(3) AS (floor(datediff(hired,birth)/365.25)) virtual,
fn CHAR(100) NOT NULL DEFAULT '' special=FILEID)
ENGINE=CONNECT table_type=FIX file_name='boys.txt' mapped=YES lrecl=47;
This page is licensed: GPLv2
For JDBC, using GET_LINES() is preferable, because it's more efficient than individual GET_LINE() calls.
Procedure PUT() - submit a partial line into the buffer.
Procedure NEW_LINE() - terminate a line submitted by PUT().
Procedure GET_LINE() - read one line (the earliest) from the buffer. When a line is read by GET_LINE(), it's automatically removed from the buffer.
Procedure GET_LINES() - read all lines (as an array of strings) from the buffer - this procedure isn't implemented.
The special columns available in CONNECT tables are:
ROWID (Integer): The row ordinal number in the table. This is not quite equivalent to a virtual column with an auto increment of 1, because rows are renumbered when deleting rows.
ROWNUM (Integer): The row ordinal number in the file. This is different from ROWID for multiple tables, TBL/XCOL/OCCUR/PIVOT tables, XML tables with a multiple column, and for DBF tables, where ROWNUM includes soft-deleted rows.
FILEID, FDISK, FPATH, FNAME, FTYPE (String): FILEID returns the full name of the file this row belongs to. Useful in particular for multiple tables represented by several files. The other special columns can be used to retrieve only one part of the full name.
TABID (String): The name of the table this row belongs to. Useful for TBL tables.
PARTID (String): The name of the partition this row belongs to. Specific to partitioned tables.
SERVID (String): The name of the federated server or server host used by a MYSQL table. "ODBC" for an ODBC table, "JDBC" for a JDBC table, and "Current" for all other tables.
All these can be combined or transformed by further SQL operations. This makes working with CONNECT much more flexible than just using the LOAD statement.
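For instance, a query against the boys table defined earlier, whose linenum and fn columns were declared with special=ROWID and special=FILEID, might look like this (column names are taken from that CREATE TABLE example):

```sql
-- linenum and fn expose the ROWID and FILEID special values
-- declared in the CREATE TABLE above
SELECT linenum, name, fn
FROM boys
WHERE city = 'Boston';
```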
This page is licensed: GPLv2
CREATE TABLE biblio ENGINE=myisam SELECT * FROM xsampall2;
A less-used concatenating function is CONCAT_WS(). It strings together columns with a separator between each, which can be useful when making data available to other programs. For instance, suppose we have a program that will import data but requires the fields to be separated by vertical bars. We could just export the data, or we could use a SELECT statement like the one that follows in conjunction with an interface written in an API language such as Perl:
The first element above is the separator. The remaining elements are the columns to be strung together.
If we want to format a long number with commas every three digits and a period for the decimal point (e.g., 100,000.00), we can use the function FORMAT() like so:
In this statement, the CONCAT() will place a dollar sign in front of the numbers found in the col5 column, which are formatted with commas by FORMAT(). The 2 within the FORMAT() stipulates two decimal places.
Occasionally, one will want to convert the text from a column to either all upper-case letters or all lower-case letters. In the example that follows, the output of the first column is converted to upper-case and the second to lower-case:
When displaying data in forms, it's sometimes useful to pad the data displayed with zeros or dots or some other filler. This can be necessary when dealing with VARCHAR columns where the width varies to help the user to see the column limits. There are two functions that may be used for padding: LPAD() and RPAD().
In this SQL statement, dots are added to the right end of each part number. So a part number of "H200" will display as "H200...." (without the quotes). Each part's description will have underscores preceding it: a part with a description of "brass hinge" will display as "____brass hinge".
If a column is a CHAR data-type, a fixed-width column, then it may be necessary to trim any leading or trailing spaces from displays. There are a few functions to accomplish this task. The LTRIM() function will eliminate any leading spaces on the left, so "  H200" becomes "H200". For columns with trailing spaces on the right, RTRIM() will work: "H500  " becomes "H500". A more versatile trimming function, though, is TRIM(). With it one can trim left, right, or both. Below are a few examples:
In the first TRIM() clause, the padding component is specified; the leading dots are to be trimmed from the output of col1. The trailing spaces are trimmed off of col2—space is the default. Both leading and trailing under-scores are trimmed from col3 above. Unless specified, BOTH is the default. So leading and trailing spaces are trimmed from col4 in the statement here.
When there is a need to extract specific elements from a column, MariaDB has a few functions that can help. Suppose a column in the table contacts contains the telephone numbers of sales contacts, including the area-codes, but without any dashes or parentheses. The area-code of each could be extracted for sorting with the LEFT() and the telephone number with the RIGHT() function.
In the LEFT() function above, the column telephone is given along with the number of characters to extract, starting from the first character on the left in the column. The RIGHT() function is similar, but it starts from the last character on the right, counting left to capture, in this statement, the last seven characters. In the SQL statement above, area_code is reused to order the results set. To reformat the telephone number, it is necessary to use the SUBSTRING() function.
In this SQL statement, the CONCAT() function is employed to assemble some characters and extracted data to produce a common display for telephone numbers (e.g., (504) 555-1234). The first element of the CONCAT() is an opening parenthesis. Next, a LEFT() is used to get the first three characters of telephone, the area-code. After that, a closing parenthesis along with a space is added to the output. The next element uses the SUBSTRING() function to extract the telephone number's prefix, starting at the fourth position, for a total of three characters. Then a dash is inserted into the display. Finally, the function MID() extracts the remainder of the telephone number, starting at the seventh position. The functions MID() and SUBSTRING() are interchangeable and their syntax is the same. By default, for both functions, if the number of characters to capture isn't specified, then it's assumed that the remaining ones are to be extracted.
There are a few functions in MariaDB that can help in manipulating text. One such function is REPLACE(). With it every occurrence of a search parameter in a string can be replaced. For example, suppose we wanted to replace the title Mrs. with Ms. in a column containing the person's title, but only in the output. The following SQL would do the trick:
We're using the ever handy CONCAT() function to put together the contact's name with spaces. The REPLACE() function extracts each title and replaces Mrs. with Ms., where applicable. Otherwise, for all other titles, it displays them unchanged.
If we want to insert or replace certain text from a column (but not all of its contents), we could use the INSERT() function in conjunction with the LOCATE() function. For example, suppose another contacts table has the contact's title and full name in one column. To change the occurrences of Mrs. to Ms., we could not use REPLACE() since the title is embedded in this example. Instead, we would do the following:
The first element of the INSERT() function is the column. The second element, which contains the LOCATE(), is the position in the string where text is to be inserted. The third element is optional; it states the number of characters to overwrite. In this case, Mrs., which is four characters, is overwritten with Ms. (the final element), which is only three. Incidentally, if 0 is specified, then nothing is overwritten; text is inserted only. As for the LOCATE() function, the first element is the search text and the second the column. It returns the position within the column where the text is found. If it's not found, then 0 is returned. A value of 0 for the position in the INSERT() function negates it and returns the value of name unchanged.
On the odd chance that there is a need to reverse the content of a column, there's the REVERSE() function. You would just place the column name within the function. Another minor function is the REPEAT() function. With it a string may be repeated in the display:
The first component of the function above is the string or column to be repeated. The second component states the number of times it's to be repeated.
The function CHAR_LENGTH() is used to determine the number of characters in a string. This could be useful in a situation where a column contains different types of information of specific lengths. For instance, suppose a column in a table for a college contains identification numbers for students, faculty, and staff. If student identification numbers have eight characters while others have less, the following will count the number of student records:
The COUNT() function above counts the number of rows that meet the condition of the WHERE clause.
In a SELECT statement, an ORDER BY clause can be used to sort a results set by a specific column. However, if the column contains IP addresses, a simple sort may not produce the desired results:
In the limited results above, the IP address 10.0.2.1 should be second. This happens because the column is being sorted lexically and not numerically. The function INET_ATON() will solve this sorting problem.
Basically, the INET_ATON() function will convert IP addresses to regular numbers for numeric sorting. For instance, if we were to use the function in the list of columns in a SELECT statement, instead of the WHERE clause, the address 10.0.1.1 would return 167772417, 10.0.11.1 will return 167774977, and 10.0.2.1 the number 167772673. As a complement to INET_ATON(), the function INET_NTOA() will translate these numbers back to their original IP addresses.
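As a quick check of those values, the conversions can be run directly (the numbers shown match the addresses above):

```sql
SELECT INET_ATON('10.0.1.1');   -- 167772417
SELECT INET_ATON('10.0.2.1');   -- 167772673
SELECT INET_NTOA(167774977);    -- '10.0.11.1'
```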
MariaDB is fairly case insensitive, which usually is fine. However, to be able to check by case, the STRCMP() function can be used. It converts the column examined to a string and makes a comparison to the search parameter.
If there is an exact match, the function STRCMP() returns 0. So if col3 here contains "Text", it won't match. Incidentally, if col3 alphabetically precedes the string to which it's compared, a -1 is returned. If it comes after it, a 1 is returned.
When you have list of items in one string, the SUBSTRING_INDEX() can be used to pull out a sub-string of data. As an example, suppose we have a column which has five elements, but we want to retrieve just the first two elements. This SQL statement will return them:
The first component in the function above is the column or string to be picked apart. The second component is the delimiter. The third is the number of elements to return, counting from the left. If we want to grab the last two elements, we would use a negative two to instruct MariaDB to count from the right end.
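For example, with a hypothetical five-element list 'a|b|c|d|e':

```sql
SELECT SUBSTRING_INDEX('a|b|c|d|e', '|', 2);   -- 'a|b'
SELECT SUBSTRING_INDEX('a|b|c|d|e', '|', -2);  -- 'd|e'
```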
There are more string functions available in MariaDB. A few of the functions mentioned here have aliases or close alternatives. There are also functions for converting between ASCII, binary, hexi-decimal, and octal strings. And there are also string functions related to text encryption and decryption that were not mentioned. However, this article has given you a good collection of common string functions that will assist you in building more powerful and accurate SQL statements.
This page is licensed: CC BY-SA / Gnu FDL
The CONNECT storage engine's shared library is included in MariaDB packages as ha_connect.so (or ha_connect.dll on Windows) on systems where it can be built.
The CONNECT storage engine is included in binary tarballs on Linux.
The CONNECT storage engine can also be installed via a package manager on Linux. In order to do so, your system needs to be configured to install from one of the MariaDB repositories.
You can configure your package manager to install it from MariaDB Corporation's MariaDB Package Repository by using the MariaDB Package Repository setup script.
You can also configure your package manager to install it from MariaDB Foundation's MariaDB Repository by using the MariaDB Repository Configuration Tool.
Installing with yum/dnf
On RHEL, CentOS, Fedora, and other similar Linux distributions, it is highly recommended to install the relevant RPM package from MariaDB's repository using yum or dnf. Starting with RHEL 8 and Fedora 22, yum has been replaced by dnf, which is the next major version of yum. However, yum commands still work on many systems that use dnf:
Installing with apt-get
On Debian, Ubuntu, and other similar Linux distributions, it is highly recommended to install the relevant DEB package from MariaDB's repository using apt-get:
Installing with zypper
On SLES, OpenSUSE, and other similar Linux distributions, it is highly recommended to install the relevant RPM package from MariaDB's repository using zypper:
Once the shared library is in place, the plugin is not actually installed by MariaDB by default. There are two methods that can be used to install the plugin with MariaDB.
The first method can be used to install the plugin without restarting the server. You can install the plugin dynamically by executing INSTALL SONAME or INSTALL PLUGIN:
The second method can be used to tell the server to load the plugin when it starts up. The plugin can be installed this way by providing the --plugin-load or the --plugin-load-add options. This can be specified as a command-line argument to mysqld or it can be specified in a relevant server option group in an option file:
You can uninstall the plugin dynamically by executing UNINSTALL SONAME or UNINSTALL PLUGIN:
If you installed the plugin by providing the --plugin-load or the --plugin-load-add options in a relevant server option group in an option file, then those options should be removed to prevent the plugin from being loaded the next time the server is restarted.
The CONNECT storage engine has some external dependencies.
The CONNECT storage engine requires an ODBC library. On Unix-like systems, that usually means installing unixODBC. On some systems, this is installed as the unixODBC package:
On other systems, this is installed as the libodbc1 package:
If you do not have the ODBC library installed, then you may get an error about a missing library when you attempt to install the plugin:
This page is licensed: GPLv2
CALL DBMS_OUTPUT.ENABLE;
DROP TABLE t1;
CREATE TABLE t1 (line VARCHAR2(400), status INTEGER);
DECLARE
line VARCHAR2(400);
status INTEGER;
BEGIN
DBMS_OUTPUT.PUT_LINE('line1');
DBMS_OUTPUT.PUT_LINE('line2');
DBMS_OUTPUT.GET_LINE(line, status);
INSERT INTO t1 VALUES (line, status);
DBMS_OUTPUT.PUT_LINE('line3'); -- This clears the buffer (removes line2) before putting line3
LOOP
DBMS_OUTPUT.GET_LINE(line, status);
INSERT INTO t1 VALUES (line, status);
EXIT WHEN status <> 0;
END LOOP;
END;
/
SELECT * FROM t1;
TYPE CHARARR IS TABLE OF VARCHAR2(32767) INDEX BY BINARY_INTEGER;
TYPE DBMSOUTPUT_LINESARRAY IS VARRAY(2147483647) OF VARCHAR2(32767);
SET sql_mode=ORACLE;
DELIMITER /
DECLARE
all_lines MEDIUMTEXT CHARACTER SET utf8mb4 :='';
line MEDIUMTEXT CHARACTER SET utf8mb4;
status INT;
BEGIN
sys.DBMS_OUTPUT.PUT_LINE('line1');
sys.DBMS_OUTPUT.PUT_LINE('line2');
sys.DBMS_OUTPUT.PUT_LINE('line3');
LOOP
sys.DBMS_OUTPUT.GET_LINE(line, status);
EXIT WHEN status > 0;
all_lines:= all_lines || line || '\n';
END LOOP;
SELECT all_lines;
END;
/
DELIMITER ;
SELECT * FROM boys WHERE city = 'boston';
SELECT CONCAT(name_first, ' ', name_last)
AS Name
FROM contacts;
SELECT CONCAT_WS('|', col1, col2, col3)
FROM table1;
SELECT CONCAT('$', FORMAT(col5, 2))
FROM table3;
SELECT UCASE(col1),
LCASE(col2)
FROM table4;
SELECT RPAD(part_nbr, 8, '.') AS 'Part Nbr.',
LPAD(description, 15, '_') AS Description
FROM catalog;
SELECT TRIM(LEADING '.' FROM col1),
TRIM(TRAILING FROM col2),
TRIM(BOTH '_' FROM col3),
TRIM(col4)
FROM table5;
SELECT LEFT(telephone, 3) AS area_code,
RIGHT(telephone, 7) AS tel_nbr
FROM contacts
ORDER BY area_code;
SELECT CONCAT('(', LEFT(telephone, 3), ') ',
SUBSTRING(telephone, 4, 3), '-',
MID(telephone, 7)) AS 'Telephone Number'
FROM contacts
ORDER BY LEFT(telephone, 3);
SELECT CONCAT(REPLACE(title, 'Mrs.', 'Ms.'),
' ', name_first, ' ', name_last) AS Name
FROM contacts;
SELECT INSERT(name, LOCATE('Mrs.', name), 4, 'Ms.')
FROM contacts;
SELECT REPEAT(col1, 2)
FROM table1;
SELECT COUNT(school_id)
AS 'Number of Students'
FROM table8
WHERE CHAR_LENGTH(school_id)=8;
SELECT ip_address
FROM computers WHERE server='Y'
ORDER BY ip_address LIMIT 3;
+-------------+
| ip_address |
+-------------+
| 10.0.1.1 |
| 10.0.11.1 |
| 10.0.2.1 |
+-------------+
SELECT ip_address
FROM computers WHERE server='Y'
ORDER BY INET_ATON(ip_address) LIMIT 3;
SELECT col1, col2
FROM table6
WHERE STRCMP(col3, 'text')=0;
SELECT SUBSTRING_INDEX(col4, '|', 2)
FROM table7;
sudo yum install MariaDB-connect-engine
sudo apt-get install mariadb-plugin-connect
sudo zypper install MariaDB-connect-engine
INSTALL SONAME 'ha_connect';
[mariadb]
...
plugin_load_add = ha_connect
UNINSTALL SONAME 'ha_connect';
sudo yum install unixODBC
sudo apt-get install libodbc1
INSTALL SONAME 'ha_connect';
ERROR 1126 (HY000): Can't open shared library '/home/ian/MariaDB_Downloads/10.1.17/lib/plugin/ha_connect.so'
(errno: 2, libodbc.so.1: cannot open shared object file: No such file or directory)
localhost
The user name is either your Unix login name, or ODBC on Windows.
No password is sent.
The client will connect to the server with the default socket, but not any particular database on the server.
These defaults can be overridden by specifying a particular parameter to use. For example:
In this case:
-h specifies a host. Instead of using localhost, the IP 166.78.144.191 is used.
-u specifies a user name, in this case username
-p specifies a password, password. Note that for passwords, unlike the other parameters, there cannot be a space between the option (-p) and the value (password). It is also not secure to use a password in this way, as other users on the system can see it as part of the command that has been run. If you include the -p option, but leave out the password, you are prompted for it, which is more secure.
The database name is provided as the first argument after all the options, in this case database_name.
It will connect with the default TCP/IP port, 3306.
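Putting the options described above together, the command would look like this (166.78.144.191, username, password, and database_name are the placeholder values from the list above):

```shell
mariadb -h 166.78.144.191 -u username -ppassword database_name
```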
Connect to the MariaDB server on the given host. The default host is localhost. By default, MariaDB does not permit remote logins - see Configuring MariaDB for Remote Client Access.
The password of the MariaDB account. It is generally not secure to enter the password on the command line, as other users on the system can see it as part of the command that has been run. If you include the -p or --password option, but leave out the password, you are prompted for it, which is more secure.
On Windows systems that have been started with the --enable-named-pipe option, use this option to connect to the server using a named pipe.
The TCP/IP port number to use for the connection. The default is 3306.
Specifies the protocol to be used for the connection. It can be one of TCP, SOCKET, PIPE or MEMORY (case-insensitive). Usually you would not want to change this from the default. For example, on Unix, a Unix socket file (SOCKET) is the default protocol, and usually results in the quickest connection.
TCP: A TCP/IP connection to a server (either local or remote). Available on all operating systems.
SOCKET: A Unix socket file connection, available to the local server on Unix systems only. If a socket is not specified with --socket, in a config file, or with the environment variable MYSQL_UNIX_PORT, then the default /tmp/mysql.sock is used.
PIPE: A named-pipe connection (either local or remote). Available on Windows only.
MEMORY: A shared-memory connection to the local server on Windows systems only.
Only available on Windows systems in which the server has been started with the --shared-memory option, this specifies the shared-memory name to use for connecting to a local server. The value is case-sensitive, and defaults to MARIADB.
For connections to localhost, this specifies either the Unix socket file to use (default /tmp/mysql.sock), or, on Windows where the server has been started with the --enable-named-pipe option, the name (case-insensitive) of the named pipe to use (default MARIADB).
A brief listing is provided below. See Secure Connections Overview and TLS System Variables for more detail.
Enable TLS for connection (automatically enabled with other TLS flags). Disable with '--skip-ssl'
CA file in PEM format (check OpenSSL docs, implies --ssl).
CA directory (check OpenSSL docs, implies --ssl).
X509 cert in PEM format (implies --ssl).
TLS cipher to use (implies --ssl).
X509 key in PEM format (implies --ssl).
Certificate revocation list (implies --ssl).
Certificate revocation list path (implies --ssl).
Verify server's "Common Name" in its cert against hostname used when connecting. This option is disabled by default.
The MariaDB user name to use when connecting to the server. The default is either your Unix login name, or ODBC on Windows. See the GRANT command for details on creating MariaDB user accounts.
It's also possible to use option files (or configuration files) to set these options. Most clients read option files. Usually, starting a client with the --help option will display which files it looks for as well as which option groups it recognizes.
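A minimal sketch of such an option file, using the placeholder values from the earlier example (most clients read the [client] group):

```ini
[client]
host=166.78.144.191
user=username
```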
allows you to start MariaDB without GRANT. This is useful if you lost your root password.
This page is licensed: CC BY-SA / Gnu FDL
0.0.0.0
Multiple comma-separated addresses can be given to bind_address (from MariaDB 10.11) to allow the server to listen on more than one specific interface while not listening on others.
If bind-address is bound to 127.0.0.1 (localhost), one can't connect to the MariaDB server from other hosts or from the same host over TCP/IP on a different interface than the loopback (127.0.0.1). This for example will not work (connecting with a hostname that points to a local IP of the host):
(/my/maria-10.11) ./client/mariadb --host=myhost --protocol=tcp --port=3306 test
ERROR 2002 (HY000): Can't connect to MySQL server on 'myhost' (115)
(/my/maria-10.11) telnet myhost 3306
Trying 192.168.0.11...
telnet: connect to address 192.168.0.11: Connection refused
Using 'localhost' works when binding with bind_address:
(my/maria-10.11) ./client/mariadb --host=localhost --protocol=tcp --port=3306 test
Reading table information for completion of table and column names
Before MariaDB 10.11, multiple comma-separated addresses cannot be given to bind_address; use a single address.
To enable MariaDB to listen to remote connections, you need to edit your defaults file. See Configuring MariaDB with my.cnf for more detail.
Common locations for defaults files:
You can see which defaults files are read and in which order by executing:
The last line shows which defaults files are read.
Once you have located the defaults file, use a text editor to open the file and try to find lines like this under the [mysqld] section:
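The lines in question would look something like this (the bind-address value may differ on your system):

```ini
skip-networking
bind-address = 127.0.0.1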
The lines may not be in this particular order, but the order doesn't matter.
If you are able to locate these lines, make sure they are both commented out (prefaced with hash (#) characters), so that they look like this:
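That is, commented out they would look like:

```ini
#skip-networking
#bind-address = 127.0.0.1
```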
Again, the order of these lines doesn't matter.
Alternatively, just add the following lines at the end of the .my.cnf file (notice that the file name starts with a dot) in your home directory, or last in your /etc/my.cnf file.
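The lines to add would presumably be these (skip-bind-address overrides any bind-address set earlier in the configuration):

```ini
[mysqld]
skip-bind-address
```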
This works as one can have any number of [mysqld] sections.
Save the file and restart the mariadbd daemon or service (see Starting and Stopping MariaDB).
You can check the options mariadbd is using by executing:
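The check can be done with --print-defaults, e.g. (the path shown assumes a source build; use plain mariadbd on an installed system):

```shell
./sql/mariadbd --print-defaults
```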
It doesn't matter if you have the original --bind-address left as the later --skip-bind-address will overwrite it.
Now that your MariaDB server installation is set up to accept connections from remote hosts, we have to add a user that is allowed to connect from something other than 'localhost'. (Users in MariaDB are defined as 'user'@'host', so 'chadmaynard'@'localhost' and 'chadmaynard'@'1.1.1.1' (or 'chadmaynard'@'server.domain.local') are different users that can have different permissions and/or passwords.)
To create a new user:
Log into the mariadb command line client (or your favorite graphical client if you wish):
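For example (prompting for the root password rather than passing it on the command line):

```shell
mariadb -u root -p
```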
if you are interested in viewing any existing remote users, issue the following SQL statement on the mysql.user table:
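A query along these lines will list the non-local accounts (User and Host are the relevant columns of the mysql.user table):

```sql
SELECT User, Host FROM mysql.user
WHERE Host NOT IN ('localhost', '127.0.0.1', '::1');
```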
(If you have a fresh install, it is normal for no rows to be returned)
Now you have some decisions to make. At the heart of every grant statement you have these things:
list of allowed privileges
what database/tables these privileges apply to
username
host this user can connect from
and optionally a password
It is common for people to want to create a "root" user that can connect from anywhere, so as an example, we'll do just that, but to improve on it we'll create a root user that can connect from anywhere on my local area network (LAN), which has addresses in the subnet 192.168.100.0/24. This is an improvement because opening a MariaDB server up to the Internet and granting access to all hosts is bad practice.
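A sketch of such a GRANT statement (the password is a placeholder; narrow the privilege list as appropriate):

```sql
GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.100.%'
  IDENTIFIED BY 'my-new-password' WITH GRANT OPTION;
```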
% is a wildcard.
For more information about how to use GRANT, please see the GRANT page.
At this point, we have accomplished our goal and we have a user 'root' that can connect from anywhere on the 192.168.100.0/24 LAN.
One more point to consider whether the firewall is configured to allow incoming request from remote clients:
On RHEL and CentOS 7, it may be necessary to configure the firewall to allow TCP access to MariaDB from remote hosts. To do so, execute both of these commands:
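On a firewalld-based system, the two commands would be along these lines:

```shell
firewall-cmd --permanent --add-service=mysql
firewall-cmd --reload
```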
If your system is running a software firewall (or is behind a hardware firewall or NAT), you must allow connections destined to the TCP port that MariaDB runs on (by default, and almost always, 3306).
To undo this change and not allow remote access anymore, simply remove the skip-bind-address line or uncomment the bind-address line in your defaults file. The end result should be that you should have in the output from ./sql/mariadbd --print-defaults the option --bind-address=127.0.0.1 and no --skip-bind-address.
The initial version of this article was copied, with permission, from Remote_Clients_Cannot_Connect on 2012-10-30.
This page is licensed: CC BY-SA / Gnu FDL
The following storage engines support export without FLUSH TABLES ... FOR EXPORT, assuming the source server is down and the receiving server is not accessing the files during the copy:
Aria: Requires a clean shutdown. The table will automatically be fixed on the receiving server if aria_chk --zerofill was not run; if it was, the table is immediately usable without any delay.
MERGE: .MRG files can be copied even while the server is running, as the file only contains a list of the tables that are part of the merge.
For all of the above storage engines (Archive, Aria, CSV, MyISAM and MERGE), one can copy tables even from a live server under the following circumstances:
You have done a FLUSH TABLES or FLUSH TABLE table_name for the specific table.
The server is not accessing the tables during the copy process.
The advantage of FLUSH TABLES table_name FOR EXPORT is that the table is read locked until UNLOCK TABLES is executed.
Warning: if you do the above live copy, you do so at your own risk; if you do something wrong, the copied table is very likely to be corrupted. The original table will of course be fine.
If you want to give a user access to some data in a table for the user to use in their MariaDB server, you can do the following:
First let's create the table we want to export. To speed things up, we create it without any indexes. We use TRANSACTIONAL=0 ROW_FORMAT=DYNAMIC for Aria to use the smallest possible row format.
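A sketch of such a statement (table and source names are hypothetical):

```sql
CREATE TABLE export_table ENGINE=Aria TRANSACTIONAL=0 ROW_FORMAT=DYNAMIC
  SELECT * FROM original_table;
```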
Then we pack it and generate the indexes. We use a big sort buffer to speed up generating the index.
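The packing and index-generation step might look like this (run while the table is not open in the server; the 1G sort buffer is an arbitrary choice, and the flags shown are assumptions to verify against your aria_chk version):

```shell
aria_pack export_table
aria_chk -rq --sort_buffer_size=1G export_table
```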
The procedure for MyISAM tables is identical, except that myisamchk doesn't have the --ignore-control-file option.
InnoDB's file-per-table tablespaces are transportable, which means that you can copy a file-per-table tablespace from one MariaDB Server to another server. See Copying Transportable Tablespaces for more information.
Tables that use most storage engines are immediately usable when their files are copied to the new datadir.
However, this is not true for tables that use InnoDB. InnoDB tables have to be imported with ALTER TABLE ... IMPORT TABLESPACE. See Copying Transportable Tablespaces for more information.
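The import flow, sketched for a hypothetical InnoDB table t1:

```sql
-- On the receiving server, with a matching CREATE TABLE already in place:
ALTER TABLE t1 DISCARD TABLESPACE;
-- Copy t1.ibd (and t1.cfg, if present) from the source server
-- into the database directory, then:
ALTER TABLE t1 IMPORT TABLESPACE;
```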