Dive into advanced configurations for MariaDB Server performance. This section covers in-depth tuning parameters, optimization strategies, and best practices to maximize speed and efficiency.
Articles on how to set up MariaDB optimally on different systems.
Explains the concept of atomic writes in MariaDB, which improve performance and data integrity on SSDs by bypassing the InnoDB doublewrite buffer, supported on devices like Fusion-io and Shannon SSDs.
When InnoDB writes to the filesystem, there is generally no guarantee that a given write operation will be complete (not partial) if a power failure occurs, or if the operating system crashes at the exact moment a write is being done.
Without detection or prevention of partial writes, the integrity of the database can be compromised after recovery.
innodb_doublewrite: an Imperfect Solution

Since its inception, InnoDB has had a mechanism to detect and ignore partial writes via the InnoDB doublewrite buffer (innodb_checksums can also be used to detect a partial write).
The doublewrite buffer, controlled by the innodb_doublewrite system variable, comes with its own set of problems. Especially on SSDs, writing each page twice can have detrimental effects, such as accelerated wear.
A better solution is to directly ask the filesystem to provide an atomic (all-or-nothing) write guarantee. Currently this is only available on a few devices, such as Fusion-io cards using the NVMFS filesystem and Shannon SSDs.
On startup, MariaDB automatically detects whether any of the supported SSD cards are in use.
When opening an InnoDB table, there is a check whether the tablespace for the table is located on a device that supports atomic writes; if so, atomic writes are automatically enabled for the table. If atomic write support is not detected, the doublewrite buffer is used instead.
One can disable atomic write support for all cards by setting the innodb_use_atomic_writes variable to OFF in your my.cnf file. It's ON by default.
To use atomic writes instead of the doublewrite buffer, add:
```ini
innodb_use_atomic_writes = 1
```
to the my.cnf config file.
Note that atomic writes are only supported on certain devices and filesystems (see the list below).
The following happens when atomic writes are enabled:

If innodb_flush_method is neither O_DIRECT, ALL_O_DIRECT, nor O_DIRECT_NO_FSYNC, it is switched to O_DIRECT.
innodb_use_fallocate is switched ON (files are extended using posix_fallocate rather than by writing zeros beyond the end of the file).
If innodb_doublewrite is set to ON, it is switched OFF and a message is written to the error log.
Whenever an InnoDB datafile is opened, a special ioctl() is issued to switch on atomic writes. If the call fails, an error is logged and returned to the caller. This means that if the system tablespace is not located on an atomic-write-capable device or filesystem, InnoDB/XtraDB will refuse to start.
MariaDB currently supports atomic writes on the following devices:
Fusion-io devices using the NVMFS filesystem (minimum driver versions apply).
Shannon SSDs (minimum driver versions apply).
This page is licensed: CC BY-SA / Gnu FDL
Recommendations for setting the Linux `vm.swappiness` kernel parameter (ideally to 1) to prevent the OS from swapping out MariaDB memory pages, which degrades performance.
Obviously, accessing swap memory from disk is far slower than accessing RAM directly. This is particularly bad on a database server because:
MariaDB's internal algorithms assume that memory is not swap, and they are highly inefficient if it is. Some algorithms are intended to avoid or delay disk IO and use memory where possible; performing this work against swap can be worse than just doing the IO directly in the first place.

Swapping increases IO compared with just using the disk directly, as pages are actively moved in and out of swap. Even an operation like dropping a dirty page that will no longer be kept in memory, although designed to improve efficiency, costs additional IO when swap is involved.
Database locks are particularly inefficient in swap. They are designed to be obtained and released often and quickly, and pausing to perform disk IO will have a serious impact on their usability.
The main way to avoid swapping is to make sure you have enough RAM for all processes that need to run on the machine. Setting MariaDB's buffer-related system variables too high can mean that under load the server runs short of memory and needs to use swap. So understanding which settings to use and how they impact your server's memory usage is critical.
Linux has a swappiness setting which determines the balance between swapping out pages (chunks of memory) from RAM to a preconfigured swap space on the hard drive.
The setting ranges from 0 to 100, with lower values meaning a lower likelihood of swapping. The default is usually 60; you can check this by running:
```bash
sysctl vm.swappiness
```
The default setting encourages the server to use swap. Since there probably won't be much else besides MariaDB processes on a database server to put into swap, you'll want to reduce this value to avoid swapping as much as possible. You can change the default by adding a line to the sysctl.conf file (usually found at /etc/sysctl.conf).
To set the swappiness to zero, add the line:
```ini
vm.swappiness = 0
```
This normally takes effect after a reboot, but you can change the value without rebooting as follows:
```bash
sysctl -w vm.swappiness=0
```
Since RHEL 6.4, setting swappiness=0 more aggressively avoids swapping out, which increases the risk of OOM killing under strong memory and I/O pressure.
A low swappiness setting is recommended for database workloads. For MariaDB databases, the recommended value is 1:
```ini
vm.swappiness = 1
```
While some administrators disable swap altogether, and you certainly want to prevent any database process from using it, it can be prudent to leave some swap space so the kernel can degrade gracefully should a memory spike occur. Having emergency swap available at least gives you some scope to kill any runaway processes.
This page is licensed: CC BY-SA / Gnu FDL
Guidance on tuning Linux kernel settings for MariaDB performance, including I/O schedulers (using `none` or `mq-deadline`), open file limits, and core file sizes.
For optimal IO performance when running a database on modern hardware, we recommend the none scheduler (previously called noop). The recommended schedulers are none for SSDs and NVMe devices, and mq-deadline (previously called deadline) for hard disks.
You can check your scheduler setting with:
```bash
cat /sys/block/${DEVICE}/queue/scheduler
```
For instance, the output may look like this:
```
$ cat /sys/block/vdb/queue/scheduler
[none] mq-deadline kyber bfq
```
Older kernels may look like:
```
$ cat /sys/block/sda/queue/scheduler
[noop] deadline cfq
```
Writing the new scheduler name to the same /sys node will change the scheduler:
```bash
echo noop > /sys/block/vdb/queue/scheduler
```
The impact of schedulers depends significantly on workload and hardware. You can measure IO latency using bcc-tools (for example, the biolatency script), with the aim of keeping the mean as low as possible.
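As a minimal sketch, assuming bcc-tools is installed with its scripts under /usr/share/bcc/tools (the path and package name vary by distribution), IO latency can be sampled like this:

```bash
# Print a histogram of block device IO latency every 10 seconds, 3 times
sudo /usr/share/bcc/tools/biolatency 10 3
```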
By default, the system limits how many open file descriptors a process can have open at one time. It has both a soft and hard limit. On many systems, both the soft and hard limit default to 1024. On an active database server, it is very easy to exceed 1024 open file descriptors. Therefore, you may need to increase the soft and hard limits. There are a few ways to do so.
How you raise the limits depends on how mysqld is started. If it is started by a service manager such as systemd or by an init script, the limits must be configured in that service manager's own configuration; consult its documentation for instructions.
Otherwise, you can set the soft and hard limits for the mysql user account by adding the following lines to /etc/security/limits.conf:
```
mysql soft nofile 65535
mysql hard nofile 65535
```
After the system is rebooted, the mysql user should use the new limits, and the user's ulimit output should look like the following:
```
$ ulimit -Sn
65535
$ ulimit -Hn
65535
```
By default, the system also limits the size of core files that can be created, with both a soft and a hard limit. On many systems, the soft limit defaults to 0. If you want mysqld to be able to write a core file when it crashes, then you may need to increase the soft and hard limits. There are a few ways to do so.
As with the open files limit, if mysqld is started by a service manager such as systemd or by an init script, then the core file size limits must be configured there; consult its documentation for instructions.
Otherwise, you can set the soft and hard limits for the mysql user account by adding the following lines to /etc/security/limits.conf:
```
mysql soft core unlimited
mysql hard core unlimited
```
After the system is rebooted, the mysql user should use the new limits, and the user's ulimit output should look like the following:
```
$ ulimit -Sc
unlimited
$ ulimit -Hc
unlimited
```
This page is licensed: CC BY-SA / Gnu FDL
Best practices for configuring MariaDB server variables like `innodb_buffer_pool_size`, `aria_pagecache_buffer_size`, and `thread_handling` to maximize resource utilization on dedicated servers.
This article will help you configure MariaDB for optimal performance.
By default, MariaDB is configured to work on a desktop system and therefore uses relatively few resources. To optimize the installation for a dedicated server, a few minutes of configuration work are required.
For this article we assume that you are going to run MariaDB on a dedicated server.
Feel free to update this article if you have more ideas.
MariaDB is normally configured by editing the my.cnf file. In the next section you have a list of variables that you may want to configure for dedicated MariaDB servers.
InnoDB is normally the default storage engine with MariaDB.
You should set innodb_buffer_pool_size to about 80% of your memory. The goal is to ensure that 80% of your working set is in memory.
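For example, here is a minimal my.cnf sketch for a hypothetical dedicated server with 32 GB of RAM (the machine size is an assumption for illustration; scale the value to your own hardware):

```ini
[mariadb]
# About 80% of 32 GB on a dedicated database server
innodb_buffer_pool_size = 26G
```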
A number of other InnoDB variables are also worth reviewing. Note that some older ones, such as innodb_thread_concurrency, are deprecated and ignored in recent MariaDB versions.
MariaDB uses the Aria storage engine by default for internal temporary files. If you have many temporary files, you should set aria_pagecache_buffer_size to a reasonably large value so that temporary overflow data is not flushed to disk. The default is 128M.
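For instance, a sketch (the 1G figure is an illustrative assumption, not a universal recommendation):

```ini
[mariadb]
# Page cache for Aria, which backs internal temporary tables
aria_pagecache_buffer_size = 1G
```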
You can check if Aria is configured properly by executing:
```sql
MariaDB [test]> show global status like "Aria%";
+-----------------------------------+-------+
| Variable_name                     | Value |
+-----------------------------------+-------+
| Aria_pagecache_blocks_not_flushed | 0     |
| Aria_pagecache_blocks_unused      | 964   |
| Aria_pagecache_blocks_used        | 232   |
| Aria_pagecache_read_requests      | 9598  |
| Aria_pagecache_reads              | 0     |
| Aria_pagecache_write_requests     | 222   |
| Aria_pagecache_writes             | 0     |
| Aria_transaction_log_syncs        | 0     |
+-----------------------------------+-------+
```
If Aria_pagecache_reads is much smaller than Aria_pagecache_read_requests, and Aria_pagecache_writes is much smaller than Aria_pagecache_write_requests, then your setup is good. If aria_pagecache_buffer_size is big enough, the two counters should be 0, as above.
If you don't use MyISAM tables explicitly (true for most users), you can set key_buffer_size to a very low value, like 64K.
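The corresponding my.cnf sketch:

```ini
[mariadb]
# Shrink the MyISAM key cache when MyISAM tables are not used
key_buffer_size = 64K
```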
Using memory tables for internal temporary results can speed up execution. However, if the memory table gets full, then the memory table will be moved to disk, which can hurt performance.
You can check how the internal memory tables are performing by executing:
```sql
MariaDB [test]> show global status like "Created%tables%";
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 1     |
| Created_tmp_tables      | 2     |
+-------------------------+-------+
```
Created_tmp_tables is the total number of internal temporary tables created while executing queries such as SELECT. Created_tmp_disk_tables shows how many of these ended up on disk.
You can increase the space available for internal in-memory temporary tables by setting tmp_table_size and max_heap_table_size high enough. These values are per connection.
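A sketch, assuming a 64M budget per connection (an illustrative figure; remember that every connection may allocate this much):

```ini
[mariadb]
# Internal in-memory temporary tables may grow to this size
# before being converted to on-disk tables. Per connection!
tmp_table_size      = 64M
max_heap_table_size = 64M
```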
If you are doing a lot of fast connects / disconnects, you should increase thread_cache_size, particularly if you are running an older MariaDB version.
If you have a lot (> 128) of simultaneously running fast queries, you should consider setting thread_handling to pool-of-threads.
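In my.cnf this would look like the following sketch:

```ini
[mariadb]
# Use the thread pool instead of one thread per connection
thread_handling = pool-of-threads
```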
If you are connecting from a lot of different machines, you should increase host_cache_size to the maximum number of machines (the default is 128) to cache hostname resolution. If you don't connect from many machines, you can set this to a very low value.

Finally, the slow query log is useful for finding queries that are running slowly, and OPTIMIZE TABLE helps you defragment tables.
This page is licensed: CC BY-SA / Gnu FDL
Fusion-io develops PCIe based NAND flash memory cards and related software that can be used to speed up MariaDB databases.
The ioDrive branded products can be used as block devices (super-fast disks) or to extend basic DRAM memory. ioDrive is deployed by installing it on an x86 server and then installing the card driver under the operating system. All main line 64-bit operating systems and hypervisors are supported: RHEL, CentOS, SuSe, Debian, OEL etc. and VMware, Microsoft Windows/Server etc. Drivers and their features are constantly developed further.
ioDrive cards support software RAID and you can combine two or more physical cards into one logical drive. Through ioMemory SDK and its APIs, one can integrate and enable more thorough interworking between your own software and the cards - and cut latency.
The key differentiator between a Fusion-io card and a legacy SSD/HDD is the following: a Fusion-io card is connected directly to the system bus (PCIe), which enables high data transfer throughput (1.5 GB/s, 3.0 GB/s or 6 GB/s), and the fast direct memory access (DMA) method can be used to transfer data. The ATA/SATA protocol stack is omitted, and latency is therefore cut short. Fusion-io performance depends on server speed: the faster your processors and the newer your PCIe bus version, the better the ioDrive performance. Fusion-io memory is non-volatile; in other words, data remains on the card even when the server is powered off.
Introduction to using Fusion-io flash memory cards with MariaDB to significantly boost I/O throughput and reduce latency, including benefits like atomic write support.
You can start by using ioDrive for database files that need heavy random access.
Whole database on ioDrive.
In some cases, Fusion-io devices allow for atomic writes, which allows the server to safely disable the doublewrite buffer.
Use ioDrive as a write-through read cache. This is possible on server level with Fusion-io directCache software or in VMware environments using ioTurbine software or the ioCache bundle product. Reads happen from ioDrive and all writes go directly to your SAN or disk.
Highly Available shared storage with ION. Have two different hosts, Fusion-io cards in them and share/replicate data with Fusion-io's ION software.
The luxurious Platinum setup: running on Fusion-io SLC cards on several hosts.
MariaDB Server supports atomic writes on Fusion-io devices that use the NVMFS (formerly called DirectFS) file system. Unfortunately, NVMFS was never offered under ‘General Availability’, and SanDisk declared that NVMFS would reach end-of-life in December 2015. Therefore, NVMFS support is no longer offered by SanDisk.
MariaDB Server does not currently support atomic writes on Fusion-io devices with any other file systems.
See atomic write support for more information about MariaDB Server's atomic write support.
You can also extend the InnoDB disk cache by storing it on Fusion-io, with the card acting as extended memory.
Fusion-io memory can be formatted with a sector size of either 512 or 4096 bytes. Bigger sectors are expected to be faster, but only if I/O is done in blocks of 4KB or multiples thereof. Applied to MariaDB: if only InnoDB data files are stored in Fusion-io memory, all I/O is done in blocks of 16K and thus the 4K sector size can be used. If the InnoDB redo log (I/O block size: 512 bytes) goes to the same Fusion-io memory, then the short sectors should be used.
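As a quick check, you can inspect the sector sizes the kernel reports for a block device; a minimal sketch, where the device name /dev/fioa is a hypothetical example:

```bash
# Logical sector size of the device, in bytes
blockdev --getss /dev/fioa
# Physical block (sector) size of the device, in bytes
blockdev --getpbsz /dev/fioa
```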
Note: XtraDB has an experimental feature to increase the InnoDB log block size to 4K. If this is enabled, then both redo log I/O and page I/O in InnoDB will match a 4K sector size.
As for file systems: currently XFS is expected to yield the best performance with MariaDB. However, depending on the exact kernel version and the version of the XFS code in use, you might be affected by a bug that severely limits XFS performance in concurrent environments. This has been fixed in kernel versions above 3.5, or in RHEL6 kernels kernel-2.6.32-358 and later (because of bug 807503 being fixed).
For the pitbull machine where I have run such tests, ext4 was faster than xfs for 32 or more threads:
up to 8 threads, xfs was somewhat faster (10% on average).
at 16 threads it was a draw (2036 tps vs. 2070 tps).
at 32 threads ext4 was 28% faster (2345 tps vs. 1829 tps).
at 64 threads ext4 was even 47% faster (2362 tps vs. 1601 tps).
at higher concurrency ext4 lost some of its edge, but was still consistently better than xfs.
Those numbers are for spinning disks. I guess for Fusion-io memory the XFS numbers will be even worse.
There are several card models. ioDrive is older generation, ioDrive2 is newer. SLC sustains more writes. MLC is good enough for normal use.
ioDrive2, capacities per card 365GB, 785GB, 1.2TB with MLC. 400GB and 600GB with SLC, performance up to 535000 IOPS & 1.5GB/s bandwidth
ioDrive2 Duo, capacities per card 2.4TB MLC and 1.2TB SLC, performance up to 935000 IOPS & 3.0GB/s bandwidth
ioDrive, capacities per card 320GB, 640GB MLC and 160GB, 320GB SLC, performance up to 145000 IOPS & 790MB/s bandwidth
ioDrive Duo, capacities per card 640GB, 1.28TB MLC and 320GB, 640GB SLC, performance up to 285000 IOPS & 1.5GB/s bandwidth
ioDrive Octal, capacities per card 5TB and 10TB MLC, performance up to 1350000 IOPS & 6.7GB/s bandwidth
ioFX, a 420GB QDP MLC workstation product, 1.4GB/s bandwidth
ioCache, a 600GB MLC card with ioTurbine software bundle that can be used to speed up VMware based virtual hosts.
ioScale, 3.2TB card, building block to enable all-flash data center build out in hyperscale web and cloud environments. Product has been developed in co-operation with Facebook.
directCache - transforms ioDrive to work as a read cache in your server. Writes go directly to your SAN
ioTurbine - read cache software for VMware
ION - transforms ioDrive into a shareable storage
ioSphere - software to manage and monitor several ioDrives
This page is licensed: CC BY-SA / Gnu FDL