This article describes different techniques for inserting data quickly into MariaDB.

Background

When inserting new data into MariaDB, the things that take time are (in order of importance):

  • Syncing data to disk (at the end of each transaction)
  • Adding new keys. The larger the index, the more time it takes to keep keys updated.
  • Checking against foreign keys (if they exist).
  • Adding rows to the storage engine.
  • Sending data to the server.

The following describes the different techniques (again, in order of importance) you can use to quickly insert data into a table.

Disabling keys

You can temporarily disable updating of non-unique indexes. This is mostly useful when there are zero (or very few) rows in the table into which you are inserting data.

ALTER TABLE table_name DISABLE KEYS;
BEGIN;
... inserting data with INSERT or LOAD DATA ....
COMMIT;
ALTER TABLE table_name ENABLE KEYS;

In many storage engines (at least MyISAM and Aria), ENABLE KEYS works by scanning through the row data and collecting keys, sorting them, and then creating the index blocks. This is an order of magnitude faster than creating the index one row at a time and it also uses less key buffer memory.

Note: When you insert into an empty table with INSERT or LOAD DATA, MariaDB automatically does a DISABLE KEYS before and an ENABLE KEYS afterwards.

When inserting large amounts of data, integrity checks can be considerably time-consuming. It is possible to disable UNIQUE index checks and foreign key checks using the unique_checks and foreign_key_checks system variables:

SET @@session.unique_checks = 0;
SET @@session.foreign_key_checks = 0;
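
Putting the pieces together, a bulk load might disable both checks, wrap the inserts in a single transaction, and re-enable the checks afterwards. A sketch (the table name my_table is a placeholder):

```sql
SET @@session.unique_checks = 0;
SET @@session.foreign_key_checks = 0;

BEGIN;
-- bulk inserts; with unique_checks = 0 the engine can buffer
-- unique-index updates instead of verifying each row immediately
INSERT INTO my_table VALUES (1, 'row 1'), (2, 'row 2');
COMMIT;

-- restore the defaults so later statements are checked again
SET @@session.unique_checks = 1;
SET @@session.foreign_key_checks = 1;
```

Note that with unique_checks disabled you are responsible for ensuring the loaded data really is unique; the server will not catch duplicates for you.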

For XtraDB/InnoDB tables, the AUTO_INCREMENT lock mode can be set to 2, which is the fastest setting. Note that innodb_autoinc_lock_mode is not a dynamic variable, so it must be set at server startup, for example in the configuration file:

[mysqld]
innodb_autoinc_lock_mode=2

Also, if the table has INSERT triggers or PERSISTENT columns, you may want to drop them, insert all data, and recreate them.
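
A sketch of that drop-and-recreate pattern for a table with an INSERT trigger (the trigger, table, and column names here are illustrative):

```sql
-- save the trigger definition first (SHOW CREATE TRIGGER my_trigger),
-- then drop it for the duration of the load
DROP TRIGGER my_trigger;

LOAD DATA INFILE '/tmp/data.txt' INTO TABLE my_table;

-- recreate the trigger from the saved definition
CREATE TRIGGER my_trigger BEFORE INSERT ON my_table
  FOR EACH ROW SET NEW.created_at = NOW();
```

This only makes sense if the trigger's effect can be reproduced after the fact (or is not needed for the loaded rows); otherwise keep the trigger in place.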

Loading text files

The fastest way to insert data into MariaDB is through the LOAD DATA INFILE command.

The simplest form of the command is:

LOAD DATA INFILE 'file_name' INTO TABLE table_name;

You can also read a file locally on the machine where the client is running by using:

LOAD DATA LOCAL INFILE 'file_name' INTO TABLE table_name;

This is not as fast as reading the file on the server side, but the difference is not that big.
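
In practice the input file rarely matches the defaults (tab-separated fields, newline-terminated lines), so you usually spell out the format. A sketch for loading a CSV file with a header row (the path and table name are examples):

```sql
LOAD DATA INFILE '/tmp/data.csv'
  INTO TABLE my_table
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
  IGNORE 1 LINES;   -- skip the header row
```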

LOAD DATA INFILE is very fast because:

  1. there is no parsing of SQL.
  2. data is read in big blocks.
  3. if the table is empty at the beginning of the operation, all non-unique indexes are disabled during the operation.
  4. the engine is told to cache rows first and then insert them in big blocks (at least MyISAM and Aria support this).
  5. for empty tables, some transactional engines (like Aria) do not log the inserted data in the transaction log because one can rollback the operation by just doing a TRUNCATE on the table.

Because of the above speed advantages, when you need to insert many rows at a time it is often faster to write the rows to a local file and load them with LOAD DATA INFILE than to insert them with individual INSERT statements.

Starting with MariaDB 5.3, you also get progress reporting for LOAD DATA INFILE.

mysqlimport

You can import many files in parallel with mysqlimport. For example:

mysqlimport --use-threads=10 database text-file-name [text-file-name...]

Internally mysqlimport uses LOAD DATA INFILE to read in the data.

Inserting data with INSERT statements

Using big transactions

When doing many inserts in a row, you should wrap them with BEGIN / COMMIT to avoid doing a full transaction (which includes a disk sync) for every row. For example, doing a begin/commit every 1000 inserts can speed up your inserts by almost a factor of 1000.

BEGIN;
INSERT ...
INSERT ...
COMMIT;
BEGIN;
INSERT ...
INSERT ...
COMMIT;
...

The reason you may want several moderately sized transactions instead of one big one is that smaller transactions use less transaction log space.

Multi-value inserts

You can insert many rows at once with multi-value row inserts:

INSERT INTO table_name VALUES (1, 'row 1'), (2, 'row 2'), ...;

The limit for how much data you can have in one statement is controlled by the max_allowed_packet server variable.

Inserting data into several tables at once

If you need to insert data into several tables at once, the best way to do so is to enable multi-row statements and send many inserts to the server at once:

INSERT INTO table_name_1 (auto_increment_key, data) VALUES (NULL, 'row 1');
INSERT INTO table_name_2 (auto_increment_key, reference, data) VALUES (NULL, LAST_INSERT_ID(), 'row 2');

LAST_INSERT_ID() is a function that returns the most recent auto_increment value generated by the current connection.

By default, the command line mysql client sends each of the above as a separate statement.

To send several statements to the server at once from the mysql client, change the statement delimiter:

delimiter ;;
select 1; select 2;;
delimiter ;

Note: for multi-statement queries to work, your client must enable the CLIENT_MULTI_STATEMENTS flag when calling mysql_real_connect().

Server variables that can be used to tune insert speed

Option                     Description
innodb_buffer_pool_size    Increase this if you have many indexes in InnoDB/XtraDB tables
key_buffer_size            Increase this if you have many indexes in MyISAM tables
max_allowed_packet         Increase this to allow bigger multi-insert statements
read_buffer_size           Read block size when reading a file with LOAD DATA

See mysqld options for the full list of server variables.
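
Before tuning, it is worth checking the values currently in effect. A quick way to inspect all four variables at once:

```sql
SHOW VARIABLES WHERE Variable_name IN
  ('innodb_buffer_pool_size', 'key_buffer_size',
   'max_allowed_packet', 'read_buffer_size');
```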
