MariaDB starting with 10.1

InnoDB/XtraDB Page Compression was added in MariaDB 10.1.0, and support for the snappy compression algorithm was added in MariaDB 10.1.3.


Page compression is an alternative way to compress your tables, similar to but distinct from the InnoDB COMPRESSED storage format. With page compression, only uncompressed pages are stored in the buffer pool. This approach differs significantly from legacy InnoDB compressed tables (ROW_FORMAT=COMPRESSED), where both uncompressed and compressed pages can be in the buffer pool. With page compression, pages are compressed just before being written to the tablespace, and uncompressed when they are read from the tablespace, before being placed in the buffer pool. Additionally, page compression supports several compression algorithms, not only zlib. For the moment, the only engines that fully support page compression are XtraDB and InnoDB. Page compression is available only if innodb-file-format=Barracuda and innodb-file-per-table=1.

Page compression can be used on any file system, but it is most beneficial on SSDs and Non-Volatile Memory (NVM) devices such as FusionIO atomic-series cards. Page compression also works with the doublewrite buffer enabled, but best performance is reached if doublewrite is disabled (innodb-doublewrite=0) and atomic writes are enabled (innodb-use-atomic-writes=1). This naturally requires that the file system and storage device in use support atomic writes.
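As a sketch, the startup options above could be combined in my.cnf like this (the doublewrite and atomic-write lines are appropriate only on hardware and file systems that support atomic writes):

```ini
[mysqld]
# Prerequisites for page compression:
innodb-file-format=Barracuda
innodb-file-per-table=1
innodb-compression-algorithm=lz4

# Only on devices/file systems with atomic write support:
innodb-doublewrite=0
innodb-use-atomic-writes=1
```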

Choosing compression algorithm

You specify which compression algorithm to use with the innodb-compression-algorithm startup option for MariaDB. The options are:

none: Default. Data is not compressed.
zlib: Pages are compressed with the bundled zlib compression method.
lz4: Pages are compressed using the lz4 compression method.
lzo: Pages are compressed using the lzo compression method.
lzma: Pages are compressed using the lzma compression method.
bzip2: Pages are compressed using the bzip2 compression method.
snappy: Pages are compressed using the snappy compression method.

Because not all of these compression methods are available by default on all distributions, and MariaDB Server does not bundle them, you may need to install the desired compression method's library package and then recompile MariaDB Server from the source distribution with:

cmake .
make install

After the first command above, check the cmake output to verify that it has found the desired compression method on your system.

The compression method can be changed whenever needed. Currently the setting is global, i.e. you cannot specify the compression method per table.

set global innodb_compression_algorithm=lz4;

From this point on, page compressed tables will use the lz4 compression method. This setting does not change pages that were already compressed with a different compression method, because MariaDB supports having uncompressed pages, pages compressed with e.g. lzo, and pages compressed with e.g. lz4 in the same tablespace. This is possible because every page in an InnoDB tablespace records the compression method used in its page header metadata.
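For example, you can first check which compression methods your server build has available, then switch the global method and verify the change (a sketch of an interactive session; SUPER privilege is required for SET GLOBAL):

```sql
-- Check which compression methods this server build provides:
SHOW GLOBAL STATUS LIKE 'Innodb_have_%';

-- Switch the global page compression method:
SET GLOBAL innodb_compression_algorithm=lz4;

-- Verify the change:
SHOW GLOBAL VARIABLES LIKE 'innodb_compression_algorithm';
```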

Choosing compression level

You specify the default compression level to use with the innodb-compression-level startup option for MariaDB. Valid values are 0-9, and the default is 6. Note that not all compression methods allow choosing the compression level; in those cases the compression level value is ignored.
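For instance, with zlib (which does honor the level) you could trade CPU time for a better compression ratio; a sketch:

```sql
-- Higher level = better ratio but more CPU time.
-- The value is ignored by methods that do not support levels:
SET GLOBAL innodb_compression_level=9;
```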

Creating compressed tables

By default, only tables that are explicitly specified to be compressed are actually compressed. You can create a page compressed table by specifying PAGE_COMPRESSED=1 in the CREATE TABLE statement, for example:

CREATE TABLE users (user_id INT NOT NULL, b VARCHAR(200), PRIMARY KEY(user_id))
  ENGINE=InnoDB PAGE_COMPRESSED=1;

The innodb_compression_default system variable allows you to specify whether or not new InnoDB tables are compressed by default. It is off by default (no compression).
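As a sketch, enabling the default and then creating a table without any PAGE_COMPRESSED clause (t1 is a hypothetical table name):

```sql
SET GLOBAL innodb_compression_default=ON;

-- t1 will be page compressed even though PAGE_COMPRESSED is not specified:
CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
```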

Persistent TRIM

Normally InnoDB always writes the full page, i.e. by default 16K (innodb-page-size). However, when compression is used, a different approach is possible. On file systems that support fallocate() and sparse files (e.g. ext3/ext4, xfs, nvmfs) via FALLOC_FL_PUNCH_HOLE, InnoDB writes only the actual compressed page size, aligned to the sector size. The rest of the page is trimmed using fallocate(file_handle, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, file_offset, remainder_len);. This is needed because InnoDB always reads the full page size (by default 16K).

This feature can be enabled and configured using the innodb_use_trim and innodb_use_fallocate system variables:


Note that innodb_use_trim favors NVMFS (see InnoDB holepunch compression vs the filesystem in MariaDB 10.1).
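A minimal my.cnf sketch, assuming a file system with punch-hole support:

```ini
[mysqld]
# Trim the unused tail of compressed pages via FALLOC_FL_PUNCH_HOLE:
innodb-use-trim=1
innodb-use-fallocate=1
```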

Multi-threaded flush

Normally MariaDB uses a single thread to flush dirty pages out of the buffer pool. However, when compression is used and the system has multiple cores/CPUs and a fast I/O device (such as an NVM device), you may benefit from the multi-threaded flush mechanism. This feature has two configuration variables, set at startup time:

Use the --innodb-use-mtflush=1 startup option for MariaDB to enable multi-threaded flush. You may configure the number of threads used with --innodb-mtflush-threads; the default is 8 and the current maximum is 64. On multi-core systems, a value similar to innodb-buffer-pool-instances and close to the number of cores has been shown to be effective. Naturally, you should benchmark a suitable value using your own application as a workload.
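A sketch of the corresponding my.cnf entries (the thread count is illustrative; benchmark with your own workload):

```ini
[mysqld]
innodb-use-mtflush=1
# E.g. close to innodb-buffer-pool-instances / the number of cores:
innodb-mtflush-threads=8
```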

Monitoring compression

SHOW STATUS contains new status variables that can be used to monitor compression:

Status variable name                     | Value | Description
Innodb_page_compression_saved            | 0     | Bytes saved by compression
Innodb_page_compression_trim_sect512     | 0     | Number of 512-byte sectors trimmed
Innodb_page_compression_trim_sect1024    | 0     | Number of 1024-byte sectors trimmed
Innodb_page_compression_trim_sect2048    | 0     | Number of 2048-byte sectors trimmed
Innodb_page_compression_trim_sect4096    | 0     | Number of 4096-byte sectors trimmed
Innodb_page_compression_trim_sect8192    | 0     | Number of 8192-byte sectors trimmed
Innodb_page_compression_trim_sect16384   | 0     | Number of 16384-byte sectors trimmed
Innodb_page_compression_trim_sect32768   | 0     | Number of 32768-byte sectors trimmed
Innodb_num_pages_page_compressed         | 0     | Number of pages compressed
Innodb_num_page_compressed_trim_op       | 0     | Number of trim operations
Innodb_num_page_compressed_trim_op_saved | 0     | Number of trim operations saved
Innodb_num_pages_page_decompressed       | 0     | Number of pages decompressed
Innodb_num_pages_page_compression_error  | 0     | Number of compression errors
Innodb_have_lz4                          | ON    | Whether the lz4 compression method is available
Innodb_have_lzo                          | ON    | Whether the lzo compression method is available
Innodb_have_lzma                         | ON    | Whether the lzma compression method is available
Innodb_have_bzip2                        | ON    | Whether the bzip2 compression method is available
Innodb_have_snappy                       | ON    | Whether the snappy compression method is available

Keep in mind that page compression is performed when InnoDB pages are flushed to disk, so if you are monitoring page compression via these status variables, their values are incremented when the pages are flushed, which does not necessarily happen immediately. Here's an example:

CREATE TABLE tab (
     `id` int(11) NOT NULL,
     `str` varchar(50) DEFAULT NULL,
     PRIMARY KEY (`id`)
) PAGE_COMPRESSED=1;

INSERT INTO tab VALUES (1, 'str1');

SHOW GLOBAL STATUS LIKE 'Innodb_num_pages_page_compressed';
+----------------------------------+-------+
| Variable_name                    | Value |
+----------------------------------+-------+
| Innodb_num_pages_page_compressed | 0     |
+----------------------------------+-------+

SET GLOBAL innodb_compression_algorithm=zlib;

SHOW GLOBAL STATUS LIKE 'Innodb_num_pages_page_compressed';
+----------------------------------+-------+
| Variable_name                    | Value |
+----------------------------------+-------+
| Innodb_num_pages_page_compressed | 0     |
+----------------------------------+-------+

SELECT SLEEP(10);
+-----------+
| SLEEP(10) |
+-----------+
|         0 |
+-----------+

SHOW GLOBAL STATUS LIKE 'Innodb_num_pages_page_compressed';
+----------------------------------+-------+
| Variable_name                    | Value |
+----------------------------------+-------+
| Innodb_num_pages_page_compressed | 3     |
+----------------------------------+-------+

Note also that the written *.ibd files are sparse files, so to see the effect of the compression, plain ls is not sufficient; use ls -s or du instead to view the actual allocated size.
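To illustrate the difference between apparent and allocated size, here is a sketch using a hypothetical sparse file standing in for a page compressed .ibd file:

```shell
# Create a 1 MiB sparse file (hypothetical name, stands in for an .ibd):
truncate -s 1M demo_sparse.ibd

ls -l demo_sparse.ibd   # apparent size: 1048576 bytes
ls -s demo_sparse.ibd   # allocated blocks: far fewer than the apparent size
du -h demo_sparse.ibd   # allocated size on disk

rm demo_sparse.ibd
```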


Note that Percona XtraBackup (as of 2.4) will not work with MariaDB 10.1 compression. However, MariaDB's fork, MariaDB Backup, will work with compression.


  • Page compression was developed in cooperation with Fusion-io, especially Dhananjoy Das and Torben Mathiasen.
