Learn about the MyRocks storage engine in MariaDB Server. Discover its advantages for flash storage, high write throughput, and compression efficiency in modern database deployments.
Learn how to build the MyRocks storage engine from source within the MariaDB Server build process, including necessary dependencies and configuration options.
This page describes how to get MyRocks in MariaDB when compiling MariaDB from source. (See Build-Steps for instructions on how to build the upstream.)
The MariaDB build process compiles MyRocks into ha_rocksdb.so by default if the platform supports it (that is, no WITH_ROCKSDB switch is necessary).
Platform requirements:
A 64-bit platform (some 32-bit compilers have difficulties with RocksDB)
git installed (or git submodules fetched somehow)
A sufficiently recent compiler:
gcc >= 4.8, or
clang >= 3.3, or
MS Visual Studio 2015 or newer
The steps were checked on a fresh install of Ubuntu 16.04.2 LTS Xenial.
This should produce storage/rocksdb/ha_rocksdb.so which is MyRocks storage engine in the loadable form.
MyRocks does not require any special way to initialize the data directory. Minimal my.cnf file:
Run the server like this:
Compression libraries: supported compression libraries are listed in . Compiling as above, I get:
This page is licensed: CC BY-SA / Gnu FDL
sudo apt-get update
sudo apt-get -y install g++ cmake libbz2-dev libaio-dev bison zlib1g-dev libsnappy-dev
sudo apt-get -y install libgflags-dev libreadline6-dev libncurses5-dev libssl-dev liblz4-dev gdb git
git clone https://github.com/MariaDB/server.git mariadb-10.2
cd mariadb-10.2
git checkout 10.2
git submodule init
git submodule update
cmake .
make -j10
cat > ~/my1.cnf <<EOF
[mysqld]
datadir=../mysql-test/var/install.db
plugin-dir=../storage/rocksdb
language=./share/english
socket=/tmp/mysql.sock
port=3307
plugin-load=ha_rocksdb
default-storage-engine=rocksdb
EOF
(cd mysql-test; ./mtr alias)
cp -r mysql-test/var/install.db ~/data1
cd ../sql
./mysqld --defaults-file=~/my1.cnf
Snappy,Zlib,LZ4,LZ4HC
Details on using the CHECK TABLE statement with MyRocks to verify the integrity of tables and indexes, and how it differs from other engines.
MyRocks supports the CHECK TABLE command.
The command will do a number of checks to verify that the table data is self-consistent.
The details about the errors are printed into the error log. If log_warnings > 2, the error log will also have some informational messages which can help with troubleshooting.
Besides this, RocksDB has its own (low-level) log in #rocksdb/LOG file.
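For example, a minimal check on a hypothetical MyRocks table t1, with the extra informational messages enabled first:
SET GLOBAL log_warnings=3;
CHECK TABLE t1;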
This page is licensed: CC BY-SA / Gnu FDL
A guide to diagnosing and resolving performance issues in MyRocks using status variables, `SHOW ENGINE ROCKSDB STATUS`, and RocksDB performance context.
MyRocks exposes its performance metrics through several interfaces:
Status variables
SHOW ENGINE ROCKSDB STATUS
RocksDB's perf context
The contents slightly overlap, but each source has its own unique information, so be sure to check all three.
Check the output of
See for more information.
This produces a lot of information.
One particularly interesting part is compaction statistics. It shows the amount of data on each SST level and other details:
RocksDB has an internal mechanism called "perf context". The counter values are exposed through two tables:
- global counters
- Per-table/partition counters
By default statistics are NOT collected. One needs to set to some value (e.g. 3) to enable collection.
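A minimal sketch of enabling and inspecting the counters (this assumes the controlling variable is rocksdb_perf_context_level and that it can be set at session scope; the exact column layout of the tables may differ between versions):
SET SESSION rocksdb_perf_context_level=3;
SELECT * FROM information_schema.ROCKSDB_PERF_CONTEXT_GLOBAL;
SELECT * FROM information_schema.ROCKSDB_PERF_CONTEXT WHERE table_name='t1';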
This page is licensed: CC BY-SA / Gnu FDL
Best practices and methods for efficiently loading large datasets into MyRocks tables, including using bulk loading features to improve performance.
Being a write-optimized storage engine, MyRocks has special ways to load data much faster than normal INSERTs would.
See:
the section about "Migrating from InnoDB to MyRocks in production" has some clues.
Data-Loading covers the topic in greater detail.
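A minimal sketch of a bulk load session (the table names are hypothetical; by default bulk-load mode expects the data to arrive in primary key order, see rocksdb_bulk_load_allow_unsorted):
SET SESSION rocksdb_bulk_load=1;
INSERT INTO t1 SELECT * FROM t1_source ORDER BY pk;
SET SESSION rocksdb_bulk_load=0;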
This page is licensed: CC BY-SA / Gnu FDL
MyRocks storage engine transactional isolation.
MyRocks uses snapshot isolation.
It supports READ-COMMITTED and REPEATABLE-READ.
SERIALIZABLE is not supported.
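For example (t1 is a hypothetical MyRocks table):
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
SELECT * FROM t1;
COMMIT;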
There is no "gap locking" which makes Statement Based Replication unsafe (see MyRocks and Replication).
This page is licensed: CC BY-SA / Gnu FDL
ERROR 1105 (HY000): [./.rocksdb/test.t1_PRIMARY_2_0.bulk_load.tmp] bulk load error:
Invalid argument: External file requires flush
MyRocks storage engine transactions with consistent snapshot.
FB/MySQL has added new syntax:
START TRANSACTION WITH CONSISTENT ROCKSDB|INNODB SNAPSHOT;
The statement returns the binlog coordinates pointing at the snapshot.
MariaDB (and Percona Server) support an extension to the regular
START TRANSACTION WITH CONSISTENT SNAPSHOT;
syntax, as documented in Enhancements for START TRANSACTION WITH CONSISTENT SNAPSHOT.
After issuing the statement, one can examine the binlog_snapshot_file and binlog_snapshot_position status variables to see the binlog position that corresponds to the snapshot.
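A minimal illustration (the coordinates returned will depend on your binlog):
START TRANSACTION WITH CONSISTENT SNAPSHOT;
SHOW STATUS LIKE 'binlog_snapshot_%';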
This page is licensed: CC BY-SA / Gnu FDL
Differences between variants of the MyRocks storage engine.
MyRocks is available in:
Facebook's (FB) MySQL branch (originally based on MySQL 5.6)
MariaDB (from 10.2 and 10.3)
Percona Server from 5.7
This page lists differences between these variants.
This is a work in progress. The contents are not final.
Understand how MyRocks implements group commit to coordinate with the binary log, ensuring data consistency and crash safety for replicated transactions.
MyRocks supports group commit with the ().
(The following is only necessary if you are studying MyRocks internals)
MariaDB's group commit counters are:
- how many transactions were written to the binary log
- how many group commits happened (e.g. if each group had two transactions, this is half of )
Learn how to configure and use Bloom filters in MyRocks to speed up point lookups by probabilistically determining if a key exists in a data file.
SHOW STATUS LIKE 'Rocksdb%'
*************************** 4. row ***************************
Type: CF_COMPACTION
Name: default
Status:
** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------
L0 3/0 30.16 MB 1.0 0.0 0.0 0.0 11.9 11.9 0.0 1.0 0.0 76.6 159 632 0.251 0 0
L1 5/0 247.54 MB 1.0 0.7 0.2 0.5 0.5 0.0 11.6 2.6 58.5 44.1 12 4 2.926 30M 10M
L2 112/0 2.41 GB 1.0 0.6 0.0 0.6 0.5 -0.1 11.4 43.4 55.2 45.9 11 1 10.827 21M 3588K
L3 466/0 8.91 GB 0.4 0.0 0.0 0.0 0.0 0.0 8.9 0.0 0.0 0.0 0 0 0.000 0 0
Sum 586/0 11.59 GB 0.0 1.3 0.2 1.0 12.8 11.8 32.0 1.1 7.1 72.6 181 637 0.284 52M 13M
Int 0/0 0.00 KB 0.0 0.9 0.1 0.8 0.8 0.0 0.1 20.5 48.4 45.3 19 6 3.133 33M 3588K
On the RocksDB side, there is one relevant counter:
Rocksdb_wal_synced - How many times RocksDB's WAL file was synced. (TODO: this is after group commit happened, right?)
FB/MySQL-5.6 has a rocksdb_wal_group_syncs counter (The counter is provided by MyRocks, it is not a view of a RocksDB counter). It is increased in rocksdb_flush_wal() when doing the rdb->FlushWAL() call.
rocksdb_flush_wal() is called by MySQL's Group Commit when it wants to make the effect of several rocksdb_prepare() calls persistent.
So, the value of rocksdb_wal_group_syncs in FB/MySQL-5.6 is similar to Binlog_group_commits in MariaDB.
MariaDB doesn't have that call; each rocksdb_prepare() call takes care of being persistent on its own.
Because of that, rocksdb_wal_group_syncs is zero for MariaDB. (Currently, it is only incremented when the binlog is rotated).
So for a workload with concurrency=50, n_queries=10K, one gets
Binlog_commits=10K
Binlog_group_commits=794
Rocksdb_wal_synced=8362
This is on a RAM disk
For a workload with concurrency=50, n_queries=10K, rotating laptop hdd, one gets
Binlog_commits= 10K
Binlog_group_commits=1403
Rocksdb_wal_synced=400
The test took 38 seconds, Number of syncs was 1400+400=1800, which gives 45 syncs/sec which looks normal for this slow rotating desktop hdd.
Note that the WAL was synced fewer times than there were binlog commit groups (?)
This page is licensed: CC BY-SA / Gnu FDL
whole_key_filtering=true/false
Whether the bloom filter is for the entire key or for the prefix. In case of a prefix, you need to look at the index definition and compute the desired prefix length.
It's 4 bytes for index_nr
Then, for fixed-size columns (integer, date[time], decimal) it is key_length as shown by EXPLAIN. For VARCHAR columns, determining the length is tricky (It depends on the values stored in the table. Note that MyRocks encodes VARCHARs with "Variable-Length Space-Padded Encoding" format).
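As a worked example (the table is hypothetical): for a secondary index over a single INT column, the prefix is 4 bytes of index_nr plus 4 bytes for the INT value, i.e. 8 bytes, which is where the capped:8 prefix extractor below comes from.
CREATE TABLE t1 (
  pk INT PRIMARY KEY,
  kp1 INT,
  KEY k1(kp1) COMMENT 'cf1'
) ENGINE=ROCKSDB;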
To enable a bloom filter with 10 bits per key and an 8-byte prefix length for column family "cf1", put this into my.cnf:
and restart the server.
Check if the column family actually uses the bloom filter:
Watch these status variables:
Other useful variables are:
rocksdb_force_flush_memtable_now - bloom filter is only used when reading data from disk. If you are doing testing, flush the data to disk first.
rocksdb_skip_bloom_filter_on_read - skip using the bloom filter (default is FALSE).
This page is licensed: CC BY-SA / Gnu FDL
rocksdb_override_cf_options='cf1={block_based_table_factory={filter_policy=bloomfilter:10:false;whole_key_filtering=0;};prefix_extractor=capped:8};'
SELECT *
FROM information_schema.rocksdb_cf_options
WHERE
cf_name='cf1' AND
option_type IN ('TABLE_FACTORY::FILTER_POLICY','PREFIX_EXTRACTOR');
+---------+------------------------------+----------------------------+
| CF_NAME | OPTION_TYPE | VALUE |
+---------+------------------------------+----------------------------+
| cf1 | PREFIX_EXTRACTOR | rocksdb.CappedPrefix.8 |
| cf1 | TABLE_FACTORY::FILTER_POLICY | rocksdb.BuiltinBloomFilter |
+---------+------------------------------+----------------------------+
SHOW STATUS LIKE '%bloom%';
+-------------------------------------+-------+
| Variable_name | Value |
+-------------------------------------+-------+
| Rocksdb_bloom_filter_prefix_checked | 1 |
| Rocksdb_bloom_filter_prefix_useful | 0 |
| Rocksdb_bloom_filter_useful | 0 |
+-------------------------------------+-------+
FB and Percona store RocksDB files in $datadir/.rocksdb. MariaDB puts them in $datadir/#rocksdb. This is more friendly for packaging and OS scripts.
FB's branch doesn't provide binaries. One needs to compile it with appropriate compression libraries.
In MariaDB, available compression algorithms can be seen in the rocksdb_supported_compression_types variable. From , algorithms can be installed as a plugin. In earlier versions, the set of supported compression algorithms depends on the platform.
On Ubuntu 16.04 (current LTS) it is Snappy,Zlib,LZ4,LZ4HC .
On CentOS 7.4 it is Snappy,Zlib.
In the bintar tarball it is Snappy,Zlib.
Percona Server supports: Zlib, ZSTD, LZ4 (the default), LZ4HC. Unsupported algorithms: Snappy, BZip2, XPress.
FB's branch provides the rocksdb_git_hash status variable.
MariaDB provides the @@rocksdb_git_hash system variable.
Percona Server doesn't provide either.
Facebook's branch uses RocksDB 5.10.0 (the version number can be found in include/rocksdb/version.h)
MariaDB currently uses 5.8.0
Percona Server uses 5.8.0
FB branch provides information_schema.rocksdb_global_info type=BINLOG, NAME={FILE, POS, GTID}.
Percona Server doesn't provide it.
MariaDB doesn't provide it.
One use of that information is to take the output of myrocks_hotbackup and make it a new master.
FB branch has a "Gap Lock Detector" feature. It is at the SQL layer. It can be controlled with gap_lock_XXX variables and is disabled by default (gap-lock-raise-error=false, gap-lock-write-lock=false).
Percona Server has gap lock checking ON, but doesn't seem to provide any way to control it. Queries that use Gap Lock on MyRocks fail with an error like this:
MariaDB doesn't include the Gap Lock Detector.
Both MariaDB and Percona Server support generated columns, but neither one supports them for the MyRocks storage engine (attempts to create a table will produce an error).
Invisible columns in are supported (as they are an SQL layer feature).
Facebook's branch has a performance feature for replication slaves, rpl_skip_tx_api. It is not available in MariaDB or in Percona Server.
The above comparison was made using:
FB/MySQL 5.6.35
Percona Server 5.7.20-19-log
(MyRocks is beta)
This page is licensed: CC BY-SA / Gnu FDL
Explore the data compression options available in MyRocks, including different algorithms (Zstd, LZ4, etc.) and how to configure them per column family.
MyRocks supports several compression algorithms.
Supported compression algorithms can be checked like so:
Another way to make the check is to look into #rocksdb/LOG file in the data directory. It should have lines like:
Compression is set on a per-Column Family basis. See .
To check current compression settings for a column family one can use a query like so:
The output is like:
Current column family settings are used for the new SST files.
Compression settings are not dynamic parameters; one cannot change them by setting .
The procedure to change compression settings is as follows:
Edit my.cnf to set .
Example:
Restart the server.
The data will not be re-compressed immediately. However, all new SST files will use the new compression settings, so as data gets inserted/updated the column family will gradually start using the new option.
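If you want the existing data re-compressed sooner, one option (a sketch; it assumes the rocksdb_compact_cf variable documented in the system variables section, and the hypothetical column family name cf1) is to trigger a manual compaction of the column family after the restart:
SET GLOBAL rocksdb_compact_cf='cf1';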
Please note that the rocksdb-override-cf-options syntax is quite strict. Any typo will result in a parse error, and the MyRocks plugin will not be loaded. Depending on your configuration, the server may still start. If it does start, you can use this command to check whether the plugin is loaded:
(note that you need the "ROCKSDB" plugin. Other auxiliary plugins like "ROCKSDB_TRX" might still get loaded).
Another way to detect the error is to check the error log. When option parsing fails, it will contain messages like these:
A query to check what compression is used in the SST files that store the data for a given table (test.t1):
Example output:
This page is licensed: CC BY-SA / Gnu FDL
MyRocks storage engine statistics for the query optimizer.
This article describes how the MyRocks storage engine provides statistics to the query optimizer.
There are three kinds of statistics:
Table statistics (number of rows in the table, average row size)
Index cardinality (how many distinct values are in the index)
records-in-range estimates (how many rows are in a certain range "const1 < tbl.key < const2")
MyRocks (actually RocksDB) uses LSM files which are written once and never updated. When an LSM file is written, MyRocks will compute index cardinalities and number-of-rows for the data in the file. (The file generally has rows, index records and/or tombstones for multiple tables/indexes).
For performance reasons, statistics are computed based on a fraction of rows in the LSM file. The percentage of rows used is controlled by ; the default value is 10%.
Before the data is dumped into LSM file, it is stored in the MemTable. MemTable doesn't allow computing index cardinalities, but it can provide an approximate number of rows in the table. Use of MemTable data for statistics is controlled by ; the default value is ON.
Those who create/run MTR tests need to know whether EXPLAIN output is deterministic. For MyRocks tables, the answer is NO (just like for InnoDB).
Statistics are computed using sampling and GetApproximateMemTableStats() which means that the #rows column in the EXPLAIN output may vary slightly.
MyRocks uses RocksDB's GetApproximateSizes() call to produce an estimate for the number of rows in the certain range. The data in MemTable is also taken into account by issuing a GetApproximateMemTableStats call.
ANALYZE TABLE will possibly flush the MemTable (depending on the and settings).
After that, it will re-read statistics from the SST files and re-compute the summary numbers (TODO: and if the data was already on disk, the result should not be different from the one we had before ANALYZE?)
There are a few variables that will cause MyRocks to report certain pre-defined estimate numbers to the optimizer (see the example after this list):
@@rocksdb_records_in_range - if not 0, report that any range has this many rows
@@rocksdb_force_index_records_in_range - if not 0, and FORCE INDEX hint is used, report that any range has this many rows.
@@rocksdb_debug_optimizer_n_rows - if not 0, report that any MyRocks table has this many rows.
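A minimal sketch (it assumes the variable is settable at session scope; t1 and its key on column a are hypothetical):
SET SESSION rocksdb_records_in_range=1000;
EXPLAIN SELECT * FROM t1 WHERE a BETWEEN 10 AND 20;
SET SESSION rocksdb_records_in_range=0;
The rows column for the range scan in the EXPLAIN output is then reported as 1000.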
This page is licensed: CC BY-SA / Gnu FDL
MyRocks storage engine scans that only use indexes.
This article is about and index-only scans on secondary indexes. It applies to MariaDB's MyRocks, Facebook's MyRocks, and other variants.
The primary key in MyRocks is always the clustered key, that is, the index record is THE table record and so it's not possible to do "index only" because there isn't anything that is not in the primary key's (Key,Value) pair.
Secondary keys may or may not support index-only scans, depending on the datatypes of the columns that the query is trying to read.
commit ba295cda29daee3ffe58549542804efdfd969784
Author: Andrew Kryczka <andrewkr@fb.com>
Date: Fri Jan 12 11:03:55 2018 -0800
commit 9a970c81af9807071bd690f4c808c5045866291a
Author: Yi Wu <yiwu@fb.com>
Date: Wed Sep 13 17:21:35 2017 -0700
commit ab0542f5ec6e7c7e405267eaa2e2a603a77d570b
Author: Maysam Yabandeh <myabandeh@fb.com>
Date: Fri Sep 29 07:55:22 2017 -0700
INSERT INTO tbl2 SELECT * FROM tbl1;
ERROR 1105 (HY000): Using Gap Lock without full unique key in multi-table or multi-statement transactions
is not allowed. You need to either rewrite queries to use all unique key columns in WHERE equal conditions,
or rewrite to single-table, single-statement transaction. Query: insert into tbl2 select * from tbl1
SHOW VARIABLES LIKE 'rocksdb%compress%';
+-------------------------------------+------------------------------------+
| Variable_name | Value |
+-------------------------------------+------------------------------------+
| rocksdb_supported_compression_types | Snappy,Zlib,LZ4,LZ4HC,ZSTDNotFinal |
+-------------------------------------+------------------------------------+
2019/04/12-14:08:23.869919 7f839188b540 Compression algorithms supported:
2019/04/12-14:08:23.869920 7f839188b540 kZSTDNotFinalCompression supported: 1
2019/04/12-14:08:23.869922 7f839188b540 kZSTD supported: 1
2019/04/12-14:08:23.869923 7f839188b540 kXpressCompression supported: 0
2019/04/12-14:08:23.869924 7f839188b540 kLZ4HCCompression supported: 1
2019/04/12-14:08:23.869924 7f839188b540 kLZ4Compression supported: 1
2019/04/12-14:08:23.869925 7f839188b540 kBZip2Compression supported: 0
2019/04/12-14:08:23.869926 7f839188b540 kZlibCompression supported: 1
2019/04/12-14:08:23.869927 7f839188b540 kSnappyCompression supported: 1
MyRocks indexes store "mem-comparable keys" (that is, the key values are compared with memcmp). For some datatypes, it is easily possible to convert between the column value and its mem-comparable form, while for others the conversion is one-way.
For example, in case-insensitive collations capital and regular letters are considered identical, i.e. 'c' ='C'. For some datatypes, MyRocks stores some extra data which allows it to restore the original value back. (For the latin1_general_ci collation and character 'c', for example, it will store one bit which says whether the original value was a small 'c' or a capital letter 'C'). This doesn't work for all datatypes, though.
Index-only scans are supported for numeric and date/time datatypes. For CHAR and VAR[CHAR], it depends on which collation is used, see below for details.
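To check whether a particular query gets an index-only scan, look for "Using index" in the Extra column of EXPLAIN (a sketch; the table and index are hypothetical):
EXPLAIN SELECT kp1 FROM t1 FORCE INDEX(k1) WHERE kp1 BETWEEN 1 AND 10;
If "Using index" is absent, MyRocks has to fetch the clustered PK record for each row.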
Index-only scans are currently not supported for less frequently used datatypes, like
ENUM(...). It is actually possible to add support for these; feel free to write a patch, or at least make a case for why a particular datatype is important.
As far as Index-only support is concerned, MyRocks distinguishes three kinds of collations:
These are binary, latin1_bin, and utf8_bin.
For these collations, it is possible to convert a value back from its mem-comparable form. Hence, one can restore the original value back from its index record, and index-only scans are supported.
These are collations where one can store some extra information which helps to restore the original value.
Criteria (from storage/rocksdb/rdb_datadic.cc, rdb_is_collation_supported()) are:
The charset should use 1-byte characters (so, unicode-based collations are not included)
strxfrm(1 byte) = {one 1-byte weight value always}
no binary sorting
PAD attribute
The examples are: latin1_general_ci, latin1_general_cs, latin1_swedish_ci, etc.
Index-only scans are supported for these collations.
For these collations, there is no known way to restore the value from its mem-comparable form, and so index-only scans are not supported.
MyRocks needs to fetch the clustered PK record to get the field value.
TODO: there is also this optimization:
document it.
This page is licensed: CC BY-SA / Gnu FDL
SELECT * FROM information_schema.rocksdb_cf_options
WHERE option_type LIKE '%ompression%' AND cf_name='DEFAULT';
+---------+-----------------------------------------+---------------------------+
| CF_NAME | OPTION_TYPE | VALUE |
+---------+-----------------------------------------+---------------------------+
| default | COMPRESSION_TYPE | kSnappyCompression |
| default | COMPRESSION_PER_LEVEL | NUL |
| default | COMPRESSION_OPTS | -14:32767:0 |
| default | BOTTOMMOST_COMPRESSION | kDisableCompressionOption |
| default | TABLE_FACTORY::VERIFY_COMPRESSION | 0 |
| default | TABLE_FACTORY::ENABLE_INDEX_COMPRESSION | 1 |
+---------+-----------------------------------------+---------------------------+
rocksdb-override-cf-options='cf1={compression=kZSTD;bottommost_compression=kZSTD;}'
SELECT * FROM information_schema.plugins WHERE plugin_name='ROCKSDB';
2019-04-16 11:07:57 140283675678016 [Warning] Invalid cf config for cf1 in override options (options: cf1={compression=kLZ4Compression;bottommost_compression=kZSTDCompression;})
2019-04-16 11:07:57 140283675678016 [ERROR] RocksDB: Failed to initialize CF options map.
2019-04-16 11:07:57 140283675678016 [ERROR] Plugin 'ROCKSDB' init function returned error.
2019-04-16 11:07:57 140283675678016 [ERROR] Plugin 'ROCKSDB' registration as a STORAGE ENGINE failed.
SELECT
SP.sst_name, SP.compression_algo
FROM
information_schema.rocksdb_sst_props SP,
information_schema.rocksdb_ddl D,
information_schema.rocksdb_index_file_map IFM
WHERE
D.table_schema='test' AND D.table_name='t1' AND
D.index_number= IFM.index_number AND
IFM.sst_name=SP.sst_name;
+------------+------------------+
| sst_name | compression_algo |
+------------+------------------+
| 000028.sst | Snappy |
| 000028.sst | Snappy |
| 000026.sst | Snappy |
| 000026.sst | Snappy |
+------------+------------------+
Learn about MyRocks column families, a mechanism for grouping data similar to tablespaces, to optimize compression and Bloom filter settings per family.
MyRocks stores data in column families. These are similar to tablespaces.
By default, the data is stored in the default column family.
One can specify which column family the data goes to by using index comments:
If the column family name starts with rev:, the column family is reverse-ordered.
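For example (a hypothetical table; the column family names are illustrative only):
CREATE TABLE t1 (
  pk INT,
  a INT,
  PRIMARY KEY (pk) COMMENT 'cf_pk',
  KEY k1(a) COMMENT 'rev:cf_rev'
) ENGINE=ROCKSDB;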
Storage parameters like
Bloom filter settings
Compression settings
Whether the data is stored in reverse order
are specified on a per-column family basis.
When creating a table or index, you can specify the name of the column family for it. If the column family doesn't exist, it is automatically created.
There is currently no way to drop a column family. RocksDB supports this internally but MyRocks doesn't provide any way to do it.
Use these variables:
- a my.cnf parameter specifying default options for all column families.
- a my.cnf parameter specifying per-column family option overrides.
- a dynamically-settable variable which allows one to change parameters online. Not all parameters can be changed.
This parameter allows one to override column family options for specific column families. Here is an example of how to set option1=value1 and option2=value2 for column family cf1, and option3=value3 for column family cf2:
One can check the contents of INFORMATION_SCHEMA.ROCKSDB_CF_OPTIONS to see what options are available.
Options that are frequently configured are:
Data compression. See .
Bloom Filters. See .
See the table.
This page is licensed: CC BY-SA / Gnu FDL
This page details how MyRocks integrates with MariaDB replication, specifically addressing limitations with statement-based replication and lack of Gap Lock support.
Details about how MyRocks works with .
Statement-based replication (SBR) works as follows: SQL statements are executed on the master (possibly concurrently). They are written into the binlog (this fixes their ordering, "a serialization"). The slave then reads the binlog and executes the statements in their binlog order.
In order to prevent data drift, serial execution of statements on the slave must have the same effect as concurrent execution of these statements on the master. In other words, transaction isolation on the master must be close to SERIALIZABLE transaction isolation level (This is not a strict mathematical proof but shows the idea).
INDEX index_name(col1, col2, ...) COMMENT 'column_family_name'
InnoDB achieves this by (almost) supporting the SERIALIZABLE transaction isolation level. It does so by supporting "Gap Locks". MyRocks doesn't support SERIALIZABLE isolation, and it doesn't support gap locks.
Because of that, generally one cannot use MyRocks and statement-based replication.
Updating a MyRocks table while SBR is on will result in an error like the following:
Yes. In many cases, database applications run a restricted set of SQL statements, and it's possible to prove that lack of Gap Lock support is not a problem and data skew will not occur.
In that case, one can set @@rocksdb_unsafe_for_binlog=1 and MyRocks will work with SBR. The user is however responsible for making sure their queries are not causing a data skew.
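For example, a minimal my.cnf sketch (only after verifying that your workload is safe without gap locks):
[mysqld]
binlog_format=STATEMENT
rocksdb_unsafe_for_binlog=1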
MyRocks upstream (that is, Facebook's MySQL branch) has a number of unique replication enhancements. These are available in upstream's version of MyRocks but not in MariaDB's version of MyRocks.
Read-Free Replication (see Read-Free-Replication) TODO
unique_check_lag_threshold. This is an FB/MySQL-5.6 feature where unique checks are disabled if replication lag exceeds a certain threshold.
slave_gtid_info=OPTIMIZED. This is said to be:
This page is licensed: CC BY-SA / Gnu FDL
rocksdb_override_cf_options='cf1={option1=value1;option2=value2};cf2={option3=value3}'
ERROR 4056 (HY000): Can't execute updates on master with binlog_format != ROW.
"Whether SQL threads update mysql.slave_gtid_info table. If this value "
"is OPTIMIZED, updating the table is done inside storage engines to "
"avoid MySQL layer's performance overhead",
A list of MyRocks-specific status variables providing metrics on cache hits, compaction statistics, block cache usage, and other internal operations.
This page documents status variables related to the MyRocks storage engine. See Server Status Variables for a complete list of status variables that can be viewed with SHOW STATUS.
See also the Full list of MariaDB options, system and status variables.
Rocksdb_block_cache_addDescription: Number of blocks added to the Block Cache.
Scope: Global, Session
Data Type: numeric
Rocksdb_block_cache_add_failuresDescription: Number of failures when adding blocks to Block Cache.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_block_cache_bytes_readDescription: Bytes read from Block Cache.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_block_cache_bytes_writeDescription: Bytes written to Block Cache.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_block_cache_data_addDescription: Number of data blocks added to the Block Cache.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_block_cache_data_bytes_insertDescription: Bytes added to the Block Cache.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_block_cache_data_hitDescription: Number of hits when accessing the data block from the Block Cache.
Scope: Global, Session
Data Type: numeric
Rocksdb_block_cache_data_missDescription: Number of misses when accessing the data block from the Block Cache.
Scope: Global, Session
Data Type: numeric
Rocksdb_block_cache_filter_addDescription: Number of bloom filter blocks added to the Block Cache.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_block_cache_filter_bytes_evictDescription: Bytes of bloom filter blocks evicted from the Block Cache.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_block_cache_filter_bytes_insertDescription: Bytes of bloom filter blocks added to the Block Cache.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_block_cache_filter_hitDescription: Number of hits when accessing the filter block from the Block Cache.
Scope: Global, Session
Data Type: numeric
Rocksdb_block_cache_filter_missDescription: Number of misses when accessing the filter block from the Block Cache.
Scope: Global, Session
Data Type: numeric
Rocksdb_block_cache_hitDescription: Total number of hits for the Block Cache.
Scope: Global, Session
Data Type: numeric
Rocksdb_block_cache_index_addDescription: Number of index blocks added to Block Cache index.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_block_cache_index_bytes_evictDescription: Bytes of index blocks evicted from the Block Cache.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_block_cache_index_bytes_insertDescription: Bytes of index blocks added to the Block Cache.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_block_cache_index_hitDescription: Number of hits for the Block Cache index.
Scope: Global, Session
Data Type: numeric
Rocksdb_block_cache_index_missDescription: Number of misses for the Block Cache index.
Scope: Global, Session
Data Type: numeric
Rocksdb_block_cache_missDescription: Total number of misses for the Block Cache.
Scope: Global, Session
Data Type: numeric
Rocksdb_block_cachecompressed_hitDescription: Number of hits for the compressed Block Cache.
Scope: Global, Session
Data Type: numeric
Rocksdb_block_cachecompressed_missDescription: Number of misses for the compressed Block Cache.
Scope: Global, Session
Data Type: numeric
Rocksdb_bloom_filter_full_positiveDescription:
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_bloom_filter_full_true_positiveDescription:
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_bloom_filter_prefix_checked
Description: Number of times the Bloom Filter was checked before creating an iterator on a file.
Scope: Global, Session
Data Type: numeric
Rocksdb_bloom_filter_prefix_useful
Description: Number of times the Bloom Filter check was used to avoid creating an iterator on a file.
Scope: Global, Session
Data Type: numeric
Rocksdb_bloom_filter_useful
Description: Number of times the Bloom Filter was used instead of reading from a file.
Scope: Global, Session
Data Type: numeric
Rocksdb_bytes_readDescription: Total number of uncompressed bytes read from memtables, cache or table files.
Scope: Global, Session
Data Type: numeric
Rocksdb_bytes_writtenDescription: Total number of uncompressed bytes written.
Scope: Global, Session
Data Type: numeric
Rocksdb_compact_read_bytesDescription: Number of bytes read during compaction.
Scope: Global, Session
Data Type: numeric
Rocksdb_compact_write_bytesDescription: Number of bytes written during compaction.
Scope: Global, Session
Data Type: numeric
Rocksdb_compaction_key_drop_new
Description: Number of keys dropped during compaction due to their being overwritten by new values.
Scope: Global, Session
Data Type: numeric
Rocksdb_compaction_key_drop_obsoleteDescription: Number of keys dropped during compaction due to their being obsolete.
Scope: Global, Session
Data Type: numeric
Rocksdb_compaction_key_drop_userDescription: Number of keys dropped during compaction due to user compaction.
Scope: Global, Session
Data Type: numeric
Rocksdb_covered_secondary_key_lookupsDescription: Incremented when avoiding reading a record via a keyread. This indicates lookups that were performed via a secondary index containing a field that is only a prefix of the column, and that could return all requested fields directly from the secondary index.
Scope: Global, Session
Data Type: numeric
Rocksdb_flush_write_bytesDescription: Number of bytes written during flush.
Scope: Global, Session
Data Type: numeric
Rocksdb_get_hit_l0Description: Number of times reads got data from the L0 compaction layer.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_get_hit_l1Description: Number of times reads got data from the L1 compaction layer.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_get_hit_l2_and_upDescription: Number of times reads got data from the L2 and up compaction layer.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_getupdatessince_callsDescription: Number of calls to the GetUpdatesSince function. You may find this useful when monitoring refreshes of the transaction log.
Scope: Global, Session
Data Type: numeric
Rocksdb_iter_bytes_readDescription: Total uncompressed bytes read from an iterator, including the size of both key and value.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_l0_num_files_stall_micros
Description: Shows how long, in microseconds, writes were throttled due to too many files in L0.
Scope: Global, Session
Data Type: numeric
Removed: ,
Rocksdb_l0_slowdown_microsDescription: Total time spent waiting in microseconds while performing L0-L1 compactions.
Scope: Global, Session
Data Type: numeric
Removed: ,
Rocksdb_manual_compactions_processedDescription:
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_manual_compactions_runningDescription:
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_memtable_compaction_microsDescription:
Scope: Global, Session
Data Type: numeric
Removed: ,
Rocksdb_memtable_hitDescription: Number of memtable hits.
Scope: Global, Session
Data Type: numeric
Rocksdb_memtable_missDescription: Number of memtable misses.
Scope: Global, Session
Data Type: numeric
Rocksdb_memtable_totalDescription: Memory used, in bytes, of all memtables.
Scope: Global, Session
Data Type: numeric
Rocksdb_memtable_unflushedDescription: Memory used, in bytes, of all unflushed memtables.
Scope: Global, Session
Data Type: numeric
Rocksdb_no_file_closesDescription: Number of times files were closed.
Scope: Global, Session
Data Type: numeric
Rocksdb_no_file_errorsDescription: Number of errors encountered while trying to read data from an SST file.
Scope: Global, Session
Data Type: numeric
Rocksdb_no_file_opensDescription: Number of times files were opened.
Scope: Global, Session
Data Type: numeric
Rocksdb_num_iteratorsDescription: Number of iterators currently open.
Scope: Global, Session
Data Type: numeric
Rocksdb_number_block_not_compressedDescription: Number of uncompressed blocks.
Scope: Global, Session
Data Type: numeric
Rocksdb_number_db_nextDescription: Number of next calls.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_number_db_next_foundDescription: Number of next calls that returned data.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_number_db_prevDescription: Number of prev calls.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_number_db_prev_foundDescription: Number of prev calls that returned data.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_number_db_seekDescription: Number of seek calls.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_number_db_seek_foundDescription: Number of seek calls that returned data.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_number_deletes_filtered
Description: Number of deleted records that were not written to storage because the key did not exist.
Scope: Global, Session
Data Type: numeric
Rocksdb_number_keys_read
Description: Number of keys that have been read.
Scope: Global, Session
Data Type: numeric
Rocksdb_number_keys_updated
Description: Number of keys that have been updated.
Scope: Global, Session
Data Type: numeric
Rocksdb_number_keys_written
Description: Number of keys that have been written.
Scope: Global, Session
Data Type: numeric
Rocksdb_number_merge_failuresDescription: Number of failures encountered while performing merge operator actions.
Scope: Global, Session
Data Type: numeric
Rocksdb_number_multiget_bytes_readDescription: Number of bytes read during RocksDB MultiGet() calls.
Scope: Global, Session
Data Type: numeric
Rocksdb_number_multiget_getDescription: Number of RocksDB MultiGet() requests made.
Scope: Global, Session
Data Type: numeric
Rocksdb_number_multiget_keys_readDescription: Number of keys read through RocksDB MultiGet() calls.
Scope: Global, Session
Data Type: numeric
Rocksdb_number_reseeks_iterationDescription: Number of reseeks that have occurred inside an iteration that skipped over a large number of keys with the same user key.
Scope: Global, Session
Data Type: numeric
Rocksdb_number_sst_entry_deleteDescription: Number of delete markers written.
Scope: Global, Session
Data Type: numeric
Rocksdb_number_sst_entry_mergeDescription: Number of merge keys written.
Scope: Global, Session
Data Type: numeric
Rocksdb_number_sst_entry_otherDescription: Number of keys written that are not delete, merge or put keys.
Scope: Global, Session
Data Type: numeric
Rocksdb_number_sst_entry_putDescription: Number of put keys written.
Scope: Global, Session
Data Type: numeric
Rocksdb_number_sst_entry_singledeleteDescription: Number of single-delete keys written.
Scope: Global, Session
Data Type: numeric
Rocksdb_number_superversion_acquires
Description: Number of times the superversion structure was acquired. This is useful when tracking files for the database.
Scope: Global, Session
Data Type: numeric
Rocksdb_number_superversion_cleanupsDescription: Number of times the superversion structure performed cleanups.
Scope: Global, Session
Data Type: numeric
Rocksdb_number_superversion_releases
Description: Number of times the superversion structure was released.
Scope: Global, Session
Data Type: numeric
Rocksdb_queries_pointDescription: Number of single-row queries.
Scope: Global, Session
Data Type: numeric
Rocksdb_queries_rangeDescription: Number of multi-row queries.
Scope: Global, Session
Data Type: numeric
Rocksdb_row_lock_deadlocksDescription: Number of deadlocks.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_row_lock_wait_timeoutsDescription: Number of row lock wait timeouts.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_rows_deletedDescription: Number of rows deleted.
Scope: Global, Session
Data Type: numeric
Rocksdb_rows_deleted_blindDescription:
Scope: Global, Session
Data Type: numeric
Rocksdb_rows_expiredDescription: Number of expired rows.
Scope: Global, Session
Data Type: numeric
Rocksdb_rows_filteredDescription: Number of TTL filtered rows.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_rows_insertedDescription: Number of rows inserted.
Scope: Global, Session
Data Type: numeric
Rocksdb_rows_readDescription: Number of rows read.
Scope: Global, Session
Data Type: numeric
Rocksdb_rows_updatedDescription: Number of rows updated.
Scope: Global, Session
Data Type: numeric
Rocksdb_snapshot_conflict_errorsDescription: Number of snapshot conflict errors that have occurred during transactions that forced a rollback.
Scope: Global, Session
Data Type: numeric
Rocksdb_stall_l0_file_count_limit_slowdownsDescription: Write slowdowns due to L0 being near to full.
Scope: Global, Session
Data Type: numeric
Rocksdb_stall_l0_file_count_limit_stops
Description: Write stops due to L0 being too full.
Scope: Global, Session
Data Type: numeric
Rocksdb_stall_locked_l0_file_count_limit_slowdownsDescription: Write slowdowns due to L0 being near to full and L0 compaction in progress.
Scope: Global, Session
Data Type: numeric
Rocksdb_stall_locked_l0_file_count_limit_stopsDescription: Write stops due to L0 being full and L0 compaction in progress.
Scope: Global, Session
Data Type: numeric
Rocksdb_stall_memtable_limit_slowdownsDescription: Write slowdowns due to approaching maximum permitted number of memtables.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_stall_memtable_limit_stops
Description: Write stops due to reaching the maximum permitted number of memtables.
Scope: Global, Session
Data Type: numeric
Introduced: ,
Rocksdb_stall_microsDescription: Time in microseconds that the writer had to wait for the compaction or flush to complete.
Scope: Global, Session
Data Type: numeric
Rocksdb_stall_pending_compaction_limit_slowdownsDescription: Write slowdowns due to nearing the limit for the maximum number of pending compaction bytes.
Scope: Global, Session
Data Type: numeric
Rocksdb_stall_pending_compaction_limit_stopsDescription: Write stops due to reaching the limit for the maximum number of pending compaction bytes.
Scope: Global, Session
Data Type: numeric
Rocksdb_stall_total_slowdownsDescription: Total number of write slowdowns.
Scope: Global, Session
Data Type: numeric
Rocksdb_stall_total_stopsDescription: Total number of write stops.
Scope: Global, Session
Data Type: numeric
Rocksdb_system_rows_deletedDescription: Number of rows deleted from system tables.
Scope: Global, Session
Data Type: numeric
Rocksdb_system_rows_insertedDescription: Number of rows inserted into system tables.
Scope: Global, Session
Data Type: numeric
Rocksdb_system_rows_readDescription: Number of rows read from system tables.
Scope: Global, Session
Data Type: numeric
Rocksdb_system_rows_updatedDescription: Number of rows updated for system tables.
Scope: Global, Session
Data Type: numeric
Rocksdb_wal_bytesDescription: Number of bytes written to WAL.
Scope: Global, Session
Data Type: numeric
Rocksdb_wal_group_syncs
Description: Number of group commit WAL file syncs that have occurred. This is provided by MyRocks and is not a view of a RocksDB counter. Increased in rocksdb_flush_wal() when doing the rdb->FlushWAL() call.
Scope: Global, Session
Data Type: numeric
Rocksdb_wal_syncedDescription: Number of syncs made on RocksDB WAL file.
Scope: Global, Session
Data Type: numeric
Rocksdb_write_otherDescription: Number of writes processed by a thread other than the requesting thread.
Scope: Global, Session
Data Type: numeric
Rocksdb_write_selfDescription: Number of writes processed by requesting thread.
Scope: Global, Session
Data Type: numeric
Rocksdb_write_timedoutDescription: Number of writes that timed out.
Scope: Global, Session
Data Type: numeric
Rocksdb_write_walDescription: Number of write calls that requested WAL.
Scope: Global, Session
Data Type: numeric
This page is licensed: CC BY-SA / Gnu FDL
MyRocks is a storage engine based on RocksDB, optimized for high-write workloads and flash storage, offering superior compression and reduced write amplification.
MyRocks is an open source storage engine that was originally developed by Facebook.
MyRocks has been extended by the MariaDB engineering team to be a pluggable storage engine that you can use in your MariaDB solutions. It works seamlessly with MariaDB features. This openness in the storage layer allows you to use the right storage engine for your usage requirements, which provides optimum performance. Community contributions are one of MariaDB's greatest advantages over other databases. Under the lead of our developer Sergey Petrunia, MyRocks in MariaDB is occasionally merged with upstream MyRocks from Facebook. See more at: facebook-myrocks-mariadb
MyRocks typically gives greater performance for web-scale applications. It can be an ideal storage engine solution when you have workloads that require greater compression and I/O efficiency. It uses a Log-Structured Merge (LSM) architecture, which has advantages over B-Tree algorithms, providing efficient data ingestion along with features like read-free replication slaves and fast bulk data loading. MyRocks' distinguishing features include:
compaction filter
merge operator
backup
column families
For more MyRocks features see:
On production workloads, MyRocks was tested to prove that it provides:
Greater Space Efficiency
2x more compression: MyRocks has 2x better compression compared to compressed InnoDB, and 3-4x better compression compared to uncompressed InnoDB, meaning you use less space.
Greater Writing Efficiency
2x lower write rates to storage: MyRocks has 10x less write amplification compared to InnoDB, giving you better endurance of flash storage and improving overall throughput.
Faster Data Loading
Faster database loads: MyRocks writes data directly to the bottommost level, which avoids compaction overhead when you enable faster data loading for a session.
Faster Replication
No random reads for updating secondary keys, except for unique indexes. The Read-Free Replication option does away with random reads when updating primary keys, regardless of uniqueness, with a row-based binary logging format.
MyRocks is included from .
MyRocks is available in the MariaDB Server packages for Linux and Windows.
MariaDB optimistic parallel replication may not be supported.
MyRocks is not available for 32-bit platforms
MyRocks builds are available on platforms that support a sufficiently modern compiler, for example:
Ubuntu Trusty, Xenial, (amd64 and ppc64el)
Ubuntu Yakkety (amd64)
Debian Jessie, stable (amd64, ppc64el)
Debian Stretch, Sid (testing and unstable) (amd64)
This page is licensed: CC BY-SA / Gnu FDL
A guide to installing and configuring MyRocks, including enabling the plugin, setting up basic tables, and understanding key configuration parameters.
MyRocks is a storage engine that adds the RocksDB database to MariaDB. RocksDB is an LSM database with a great compression ratio that is optimized for flash storage.
The storage engine must be installed before it can be used.
The MyRocks storage engine's shared library is included in MariaDB packages as the ha_rocksdb.so or ha_rocksdb.dll shared library on systems where it can be built.
The MyRocks storage engine is included in binary tarballs on Linux.
The MyRocks storage engine can also be installed via a package manager on Linux. In order to do so, your system needs to be configured to install from one of the MariaDB repositories.
You can configure your package manager to install it from MariaDB Corporation's MariaDB Package Repository by using the MariaDB Package Repository setup script.
You can also configure your package manager to install it from MariaDB Foundation's MariaDB Repository by using the MariaDB Repository Configuration Tool.
Installing with yum/dnf
On RHEL, CentOS, Fedora, and other similar Linux distributions, it is highly recommended to install the relevant RPM package from MariaDB's
repository using yum or dnf. Starting with RHEL 8 and Fedora 22, yum has been replaced by dnf, which is the next major version of yum. However, yum commands still work on many systems that use dnf:
Installing with apt-get
On Debian, Ubuntu, and other similar Linux distributions, it is highly recommended to install the relevant DEB package from MariaDB's repository using apt-get:
Installing with zypper
On SLES, OpenSUSE, and other similar Linux distributions, it is highly recommended to install the relevant RPM package from MariaDB's repository using zypper:
Even with the shared library in place, the plugin is not installed by MariaDB by default. There are two methods that can be used to install the plugin with MariaDB.
The first method can be used to install the plugin without restarting the server. You can install the plugin dynamically by executing INSTALL SONAME or INSTALL PLUGIN:
The second method can be used to tell the server to load the plugin when it starts up. The plugin can be installed this way by providing the --plugin-load or the --plugin-load-add options. This can be specified as a command-line argument to mysqld or it can be specified in a relevant server option group in an option file:
Note: When installed with a package manager, an option file that contains the --plugin-load-add option may also be installed. The RPM package installs it as /etc/my.cnf.d/rocksdb.cnf, and the DEB package installs it as /etc/mysql/mariadb.conf.d/rocksdb.cnf
You can uninstall the plugin dynamically by executing UNINSTALL SONAME or UNINSTALL PLUGIN:
If you installed the plugin by providing the --plugin-load or the --plugin-load-add options in a relevant server option group in an option file, then those options should be removed to prevent the plugin from being loaded the next time the server is restarted.
After installing MyRocks you will see RocksDB in the list of plugins:
Supported compression types are listed in the rocksdb_supported_compression_types variable:
See MyRocks and Data Compression for more.
All MyRocks system variables and status variables are prefaced with "rocksdb", so you can query them with, for example:
This page is licensed: CC BY-SA / Gnu FDL
sudo yum install MariaDB-rocksdb-engine
sudo apt-get install mariadb-plugin-rocksdb
sudo zypper install MariaDB-rocksdb-engine
INSTALL SONAME 'ha_rocksdb';
[mariadb]
...
plugin_load_add = ha_rocksdb
UNINSTALL SONAME 'ha_rocksdb';
SHOW PLUGINS;
+-------------------------------+----------+--------------------+---------------+---------+
| Name | Status | Type | Library | License |
+-------------------------------+----------+--------------------+---------------+---------+
...
| ROCKSDB | ACTIVE | STORAGE ENGINE | ha_rocksdb.so | GPL |
| ROCKSDB_CFSTATS | ACTIVE | INFORMATION SCHEMA | ha_rocksdb.so | GPL |
| ROCKSDB_DBSTATS | ACTIVE | INFORMATION SCHEMA | ha_rocksdb.so | GPL |
| ROCKSDB_PERF_CONTEXT | ACTIVE | INFORMATION SCHEMA | ha_rocksdb.so | GPL |
| ROCKSDB_PERF_CONTEXT_GLOBAL | ACTIVE | INFORMATION SCHEMA | ha_rocksdb.so | GPL |
| ROCKSDB_CF_OPTIONS | ACTIVE | INFORMATION SCHEMA | ha_rocksdb.so | GPL |
| ROCKSDB_COMPACTION_STATS | ACTIVE | INFORMATION SCHEMA | ha_rocksdb.so | GPL |
| ROCKSDB_GLOBAL_INFO | ACTIVE | INFORMATION SCHEMA | ha_rocksdb.so | GPL |
| ROCKSDB_DDL | ACTIVE | INFORMATION SCHEMA | ha_rocksdb.so | GPL |
| ROCKSDB_INDEX_FILE_MAP | ACTIVE | INFORMATION SCHEMA | ha_rocksdb.so | GPL |
| ROCKSDB_LOCKS | ACTIVE | INFORMATION SCHEMA | ha_rocksdb.so | GPL |
| ROCKSDB_TRX | ACTIVE | INFORMATION SCHEMA | ha_rocksdb.so | GPL |
...
+-------------------------------+----------+--------------------+---------------+---------+
SHOW VARIABLES LIKE 'rocksdb_supported_compression_types';
+-------------------------------------+-------------+
| Variable_name | Value |
+-------------------------------------+-------------+
| rocksdb_supported_compression_types | Snappy,Zlib |
+-------------------------------------+-------------+
SHOW VARIABLES LIKE 'rocksdb%';
SHOW STATUS LIKE 'rocksdb%';
persistent cache
Galera Cluster is tightly integrated into the InnoDB storage engine (it also supports Percona's XtraDB, which is a modified version of InnoDB). Galera Cluster does not work with any other storage engines, including MyRocks (or TokuDB, for example).
Centos/RHEL 7.3 (amd64)
Fedora 24 and 25 (amd64)
OpenSUSE 42 (amd64)
Windows 64 (zip and MSI)

A comprehensive reference for MyRocks system variables, allowing fine-tuning of performance, memory usage, compaction, and other internal behaviors.
This page documents system variables related to the MyRocks storage engine. See Server System Variables for a complete list of system variables and instructions on setting them.
See also the Full list of MariaDB options, system and status variables.
rocksdb_access_hint_on_compaction_startDescription: DBOptions::access_hint_on_compaction_start for RocksDB. Specifies the file access pattern, applied to all input files, once a compaction starts.
Command line: --rocksdb-access-hint-on-compaction-start=#
Scope: Global
Dynamic: No
Data Type: numeric
Default Value: 1
Range: 0 to 3
rocksdb_advise_random_on_openDescription: DBOptions::advise_random_on_open for RocksDB.
Command line: --rocksdb-advise-random-on-open={0|1}
Scope: Global
Dynamic: No
rocksdb_allow_concurrent_memtable_writeDescription: DBOptions::allow_concurrent_memtable_write for RocksDB.
Command line: --rocksdb-allow-concurrent-memtable-write={0|1}
Scope: Global
Dynamic: No
rocksdb_allow_mmap_readsDescription: DBOptions::allow_mmap_reads for RocksDB
Command line: --rocksdb-allow-mmap-reads={0|1}
Scope: Global
Dynamic: No
rocksdb_allow_mmap_writesDescription: DBOptions::allow_mmap_writes for RocksDB
Command line: --rocksdb-allow-mmap-writes={0|1}
Scope: Global
Dynamic: No
rocksdb_allow_to_start_after_corruptionDescription: Allow server still to start successfully even if RocksDB corruption is detected.
Command line: --rocksdb-allow-to-start-after-corruption={0|1}
Scope: Global
Dynamic: No
rocksdb_background_syncDescription: Turns on background syncs for RocksDB
Command line: --rocksdb-background-sync={0|1}
Scope: Global
Dynamic: No
rocksdb_base_background_compactionsDescription: DBOptions::base_background_compactions for RocksDB
Command line: --rocksdb-base-background-compactions=#
Scope: Global
Dynamic: No
rocksdb_blind_delete_primary_key
Description: Deleting rows by primary key lookup, without reading rows (Blind Deletes). Blind delete is disabled if the table has a secondary key.
Command line: --rocksdb-blind-delete-primary-key={0|1}
Scope: Global, Session
Dynamic: Yes
rocksdb_block_cache_sizeDescription: Block_cache size for RocksDB (block size 1024)
Command line: --rocksdb-block-cache-size=#
Scope: Global
Dynamic: Yes
To see the statistics of block cache usage, check SHOW ENGINE ROCKSDB STATUS output
(search for lines starting with rocksdb.block.cache).
One can check the size of data of the block cache in DB_BLOCK_CACHE_USAGE
column of the INFORMATION_SCHEMA.ROCKSDB_DBSTATS table.
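For example (a sketch; the 1 GB value is arbitrary and the exact ROCKSDB_DBSTATS layout may vary between versions):
SET GLOBAL rocksdb_block_cache_size=1073741824;
SELECT * FROM information_schema.ROCKSDB_DBSTATS;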
rocksdb_block_restart_intervalDescription: BlockBasedTableOptions::block_restart_interval for RocksDB
Command line: --rocksdb-block-restart-interval=#
Scope: Global
Dynamic: No
rocksdb_block_sizeDescription: BlockBasedTableOptions::block_size for RocksDB
Command line: --rocksdb-block-size=#
Scope: Global
Dynamic: No
rocksdb_block_size_deviationDescription: BlockBasedTableOptions::block_size_deviation for RocksDB
Command line: --rocksdb-block-size-deviation=#
Scope: Global
Dynamic: No
rocksdb_bulk_loadDescription: Use bulk-load mode for inserts. This disables unique_checks and enables rocksdb_commit_in_the_middle.
Command line: --rocksdb-bulk-load={0|1}
Scope: Global, Session
Dynamic: Yes
rocksdb_bulk_load_allow_skDescription: Allow bulk loading of sk keys during bulk-load. Can be changed only when bulk load is disabled.
Command line: --rocksdb-bulk-load_allow_sk={0|1}
Scope: Global, Session
Dynamic: Yes
rocksdb_bulk_load_allow_unsortedDescription: Allow unsorted input during bulk-load. Can be changed only when bulk load is disabled.
Command line: --rocksdb-bulk-load_allow_unsorted={0|1}
Scope: Global, Session
Dynamic: Yes
rocksdb_bulk_load_sizeDescription: Maximum number of records in a batch for bulk-load mode.
Command line: --rocksdb-bulk-load-size=#
Scope: Global, Session
Dynamic: Yes
rocksdb_bytes_per_syncDescription: DBOptions::bytes_per_sync for RocksDB.
Command line: --rocksdb-bytes-per-sync=#
Scope: Global
Dynamic: Yes
rocksdb_cache_dumpDescription: Include RocksDB block cache content in core dump.
Command line: --rocksdb-cache-dump={0|1}
Scope: Global
Dynamic: Yes
rocksdb_cache_high_pri_pool_ratioDescription: Specify the size of block cache high-pri pool.
Command line: --rocksdb-cache-high-pri-pool-ratio=#
Scope: Global
Dynamic: Yes
rocksdb_cache_index_and_filter_blocksDescription: BlockBasedTableOptions::cache_index_and_filter_blocks for RocksDB.
Command line: --rocksdb-cache-index-and-filter-blocks={0|1}
Scope: Global
Dynamic: No
rocksdb_cache_index_and_filter_with_high_priorityDescription: cache_index_and_filter_blocks_with_high_priority for RocksDB.
Command line: --rocksdb-cache-index-and-filter-with-high-priority={0|1}
Scope: Global
Dynamic: No
rocksdb_checksums_pctDescription: Percentage of rows to be checksummed.
Command line: --rocksdb-checksums-pct=#
Scope: Global, Session
Dynamic: Yes
rocksdb_collect_sst_propertiesDescription: Enables collecting SST file properties on each flush.
Command line: --rocksdb-collect-sst-properties={0|1}
Scope: Global
Dynamic: No
rocksdb_commit_in_the_middleDescription: Commit rows implicitly every rocksdb_bulk_load_size, on bulk load/insert, update and delete.
Command line: --rocksdb-commit-in-the-middle={0|1}
Scope: Global, Session
Dynamic: Yes
rocksdb_commit_time_batch_for_recoveryDescription: TransactionOptions::commit_time_batch_for_recovery for RocksDB.
Command line: --rocksdb-commit-time-batch-for-recovery={0|1}
Scope: Global, Session
Dynamic: Yes
rocksdb_compact_cfDescription: Compact column family.
Command line: --rocksdb-compact-cf=value
Scope: Global
Dynamic: Yes
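For example, a manual compaction of a single column family could be triggered like this (assuming a column family named 'default' exists):
SET GLOBAL rocksdb_compact_cf = 'default';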
rocksdb_compaction_readahead_sizeDescription: DBOptions::compaction_readahead_size for RocksDB.
Command line: --rocksdb-compaction-readahead-size=#
Scope: Global
Dynamic: Yes
rocksdb_compaction_sequential_deletesDescription: RocksDB will trigger compaction for a file if it has more than this number of sequential deletes per window.
Command line: --rocksdb-compaction-sequential-deletes=#
Scope: Global
Dynamic: Yes
rocksdb_compaction_sequential_deletes_count_sdDescription: Count SingleDelete operations as rocksdb_compaction_sequential_deletes.
Command line: --rocksdb-compaction-sequential-deletes-count-sd={0|1}
Scope: Global
Dynamic: Yes
rocksdb_compaction_sequential_deletes_file_sizeDescription: Minimum file size required for compaction_sequential_deletes.
Command line: --rocksdb-compaction-sequential-deletes-file-size=#
Scope: Global
Dynamic: Yes
rocksdb_compaction_sequential_deletes_windowDescription: Size of the window for counting rocksdb_compaction_sequential_deletes.
Command line: --rocksdb-compaction-sequential-deletes-window=#
Scope: Global
Dynamic: Yes
rocksdb_concurrent_prepareDescription: DBOptions::concurrent_prepare for RocksDB.
Command line: --rocksdb-concurrent-prepare={0|1}
Scope: Global
Dynamic: No
rocksdb_create_checkpointDescription: Checkpoint directory.
Command line: --rocksdb-create-checkpoint=value
Scope: Global
Dynamic: Yes
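A hedged example of creating a checkpoint (the target path is hypothetical and must not already exist):
SET GLOBAL rocksdb_create_checkpoint = '/data/backups/myrocks-checkpoint';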
rocksdb_create_if_missingDescription: DBOptions::create_if_missing for RocksDB.
Command line: --rocksdb-create-if-missing={0|1}
Scope: Global
Dynamic: No
rocksdb_create_missing_column_familiesDescription: DBOptions::create_missing_column_families for RocksDB.
Command line: --rocksdb-create-missing-column-families={0|1}
Scope: Global
Dynamic: No
rocksdb_datadirDescription: RocksDB data directory.
Command line: --rocksdb-datadir[=value]
Scope: Global
Dynamic: No
rocksdb_db_write_buffer_sizeDescription: DBOptions::db_write_buffer_size for RocksDB.
Command line: --rocksdb-db-write-buffer-size=#
Scope: Global
Dynamic: No
rocksdb_deadlock_detectDescription: Enables deadlock detection.
Command line: --rocksdb-deadlock-detect={0|1}
Scope: Global, Session
Dynamic: Yes
rocksdb_deadlock_detect_depthDescription: Number of transactions deadlock detection will traverse through before assuming deadlock.
Command line: --rocksdb-deadlock-detect-depth=#
Scope: Global, Session
Dynamic: Yes
rocksdb_debug_manual_compaction_delayDescription: For debugging purposes only. Sleeps the specified number of seconds to simulate long-running compactions.
Command line: --rocksdb-debug-manual-compaction-delay=#
Scope: Global
Dynamic: Yes
rocksdb_debug_optimizer_no_zero_cardinalityDescription: If cardinality is zero, override it with some value.
Command line: --rocksdb-debug-optimizer-no-zero-cardinality={0|1}
Scope: Global
Dynamic: Yes
rocksdb_debug_ttl_ignore_pkDescription: For debugging purposes only. If true, compaction filtering will not occur on PK TTL data. This variable is a no-op in non-debug builds.
Command line: --rocksdb-debug-ttl-ignore-pk={0|1}
Scope: Global
Dynamic: Yes
rocksdb_debug_ttl_read_filter_tsDescription: For debugging purposes only. Overrides the TTL read filtering time to time + debug_ttl_read_filter_ts. A value of 0 denotes that the variable is not set. This variable is a no-op in non-debug builds.
Command line: --rocksdb-debug-ttl-read-filter-ts=#
Scope: Global
Dynamic: Yes
rocksdb_debug_ttl_rec_tsDescription: For debugging purposes only. Overrides the TTL of records to now() + debug_ttl_rec_ts. The value can be +/- to simulate a record inserted in the past vs a record inserted in the 'future'. A value of 0 denotes that the variable is not set. This variable is a no-op in non-debug builds.
Command line: --rocksdb-debug-ttl-rec-ts=#
Scope: Global
Dynamic: Yes
rocksdb_debug_ttl_snapshot_tsDescription: For debugging purposes only. Sets the snapshot during compaction to now() + debug_set_ttl_snapshot_ts. The value can be positive or negative to simulate a snapshot in the past vs a snapshot created in the 'future'. A value of 0 denotes that the variable is not set. This variable is a no-op in non-debug builds.
Command line: --rocksdb-debug-ttl-snapshot-ts=#
Scope: Global
Dynamic: Yes
rocksdb_default_cf_optionsDescription: Default cf options for RocksDB.
Command line: --rocksdb-default-cf-options=value
Scope: Global
Dynamic: No
rocksdb_delayed_write_rateDescription: DBOptions::delayed_write_rate.
Command line: --rocksdb-delayed-write-rate=#
Scope: Global
Dynamic: Yes
rocksdb_delete_cfDescription: Delete column family.
Command line: --rocksdb-delete-cf=val
Scope: Global
Dynamic: No
rocksdb_delete_obsolete_files_period_microsDescription: DBOptions::delete_obsolete_files_period_micros for RocksDB.
Command line: --rocksdb-delete-obsolete-files-period-micros=#
Scope: Global
Dynamic: No
rocksdb_enable_2pcDescription: Enable two-phase commit for MyRocks. When set, MyRocks keeps its data consistent with the binary log (in other words, the server is a crash-safe master). The consistency is achieved by doing a two-phase XA commit with the binary log.
Command line: --rocksdb-enable-2pc={0|1}
Scope: Global
Dynamic: Yes
rocksdb_enable_bulk_load_apiDescription: Enables using SstFileWriter for bulk loading.
Command line: --rocksdb-enable-bulk-load-api={0|1}
Scope: Global
Dynamic: No
rocksdb_enable_insert_with_update_cachingDescription: Whether to enable the optimization where the read from a failed insertion attempt is cached for INSERT ... ON DUPLICATE KEY UPDATE.
Command line: --rocksdb-enable-insert-with-update-caching={0|1}
Scope: Global
Dynamic: Yes
rocksdb_enable_thread_trackingDescription: DBOptions::enable_thread_tracking for RocksDB.
Command line: --rocksdb-enable-thread-tracking={0|1}
Scope: Global
Dynamic: No
rocksdb_enable_ttlDescription: Enable expired TTL records to be dropped during compaction.
Command line: --rocksdb-enable-ttl={0|1}
Scope: Global
Dynamic: Yes
rocksdb_enable_ttl_read_filteringDescription: For tables with TTL, expired records are skipped/filtered out during processing and in query results. Disabling this will allow these records to be seen, but as a result rows may disappear in the middle of transactions as they are dropped during compaction. Use with caution.
Command line: --rocksdb-enable-ttl-read-filtering={0|1}
Scope: Global
Dynamic: Yes
rocksdb_enable_write_thread_adaptive_yieldDescription: DBOptions::enable_write_thread_adaptive_yield for RocksDB.
Command line: --rocksdb-enable-write-thread-adaptive-yield={0|1}
Scope: Global
Dynamic: No
rocksdb_error_if_existsDescription: DBOptions::error_if_exists for RocksDB.
Command line: --rocksdb-error-if-exists={0|1}
Scope: Global
Dynamic: No
rocksdb_error_on_suboptimal_collationDescription: Raise an error instead of warning if a sub-optimal collation is used.
Command line: --rocksdb-error-on-suboptimal-collation={0|1}
Scope: Global
Dynamic: No
rocksdb_flush_log_at_trx_commitDescription: Sync on transaction commit, similar to innodb_flush_log_at_trx_commit. One can check the flushing by examining the related status variables.
1: Always sync on commit (the default).
0: Never sync.
2: Sync based on a timer controlled via rocksdb-background-sync.
Command line: --rocksdb-flush-log-at-trx-commit=#
Scope: Global
Dynamic: Yes
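For example, to trade some durability for throughput by syncing the WAL on a timer instead of on every commit (a sketch, not a recommendation):
SET GLOBAL rocksdb_flush_log_at_trx_commit = 2;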
rocksdb_flush_memtable_on_analyzeDescription: Forces a memtable flush on ANALYZE TABLE to get accurate cardinality.
Command line: --rocksdb-flush-memtable-on-analyze={0|1}
Scope: Global, Session
Dynamic: Yes
rocksdb_force_compute_memtable_statsDescription: Force memtable stats to always be computed.
Command line: --rocksdb-force-compute-memtable-stats={0|1}
Scope: Global
Dynamic: Yes
rocksdb_force_compute_memtable_stats_cachetimeDescription: Time in usecs to cache memtable estimates.
Command line: --rocksdb-force-compute-memtable-stats-cachetime=#
Scope: Global
Dynamic: Yes
rocksdb_force_flush_memtable_and_lzero_nowDescription: Acts like rocksdb_force_flush_memtable_now, but also compacts all L0 files.
Command line: --rocksdb-force-flush-memtable-and-lzero-now={0|1}
Scope: Global
Dynamic: Yes
rocksdb_force_flush_memtable_nowDescription: Forces a memtable flush, which may block all write requests, so be careful.
Command line: --rocksdb-force-flush-memtable-now={0|1}
Scope: Global
Dynamic: Yes
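A minimal example of forcing a flush (note again that this may block writes while it runs):
SET GLOBAL rocksdb_force_flush_memtable_now = 1;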
rocksdb_force_index_records_in_rangeDescription: Used to override the result of records_in_range() when FORCE INDEX is used.
Command line: --rocksdb-force-index-records-in-range=#
Scope: Global, Session
Dynamic: Yes
rocksdb_git_hashDescription: Git revision of the RocksDB library used by MyRocks.
Command line: --rocksdb-git-hash=value
Scope: Global
Dynamic: No
rocksdb_hash_index_allow_collisionDescription: BlockBasedTableOptions::hash_index_allow_collision for RocksDB.
Command line: --rocksdb-hash-index-allow-collision={0|1}
Scope: Global
Dynamic: No
rocksdb_ignore_unknown_optionsDescription: Enable ignoring unknown options passed to RocksDB.
Command line: --rocksdb-ignore-unknown-options={0|1}
Scope: Global
Dynamic: No
rocksdb_index_typeDescription: BlockBasedTableOptions::index_type for RocksDB.
Command line: --rocksdb-index-type=value
Scope: Global
Dynamic: No
rocksdb_info_log_levelDescription: Filter level for info logs written to the mysqld error log. Valid values include 'debug_level', 'info_level', 'warn_level', 'error_level' and 'fatal_level'.
Command line: --rocksdb-info-log-level=value
Scope: Global
Dynamic: Yes
rocksdb_io_write_timeoutDescription: Timeout for experimental I/O watchdog.
Command line: --rocksdb-io-write-timeout=#
Scope: Global
Dynamic: Yes
rocksdb_is_fd_close_on_execDescription: DBOptions::is_fd_close_on_exec for RocksDB.
Command line: --rocksdb-is-fd-close-on-exec={0|1}
Scope: Global
Dynamic: No
rocksdb_keep_log_file_numDescription: DBOptions::keep_log_file_num for RocksDB.
Command line: --rocksdb-keep-log-file-num=#
Scope: Global
Dynamic: No
rocksdb_large_prefixDescription: Support large index prefix length of 3072 bytes. If off, the maximum index prefix length is 767.
Command line: --rocksdb-large-prefix={0|1}
Scope: Global
Dynamic: Yes
rocksdb_lock_scanned_rowsDescription: Take and hold locks on rows that are scanned but not updated.
Command line: --rocksdb-lock-scanned-rows={0|1}
Scope: Global, Session
Dynamic: Yes
rocksdb_lock_wait_timeoutDescription: Number of seconds to wait for lock.
Command line: --rocksdb-lock-wait-timeout=#
Scope: Global, Session
Dynamic: Yes
rocksdb_log_dirDescription: DBOptions::log_dir for RocksDB. Where the log files are stored. An empty value implies rocksdb_datadir is used as the directory.
Command line: --rocksdb-log-dir=#
Scope: Global
Dynamic: No
rocksdb_log_file_time_to_rollDescription: DBOptions::log_file_time_to_roll for RocksDB.
Command line: --rocksdb-log-file-time-to-roll=#
Scope: Global
Dynamic: No
rocksdb_manifest_preallocation_sizeDescription: DBOptions::manifest_preallocation_size for RocksDB.
Command line: --rocksdb-manifest-preallocation-size=#
Scope: Global
Dynamic: No
rocksdb_manual_compaction_threadsDescription: How many rocksdb threads to run for manual compactions.
Command line: --rocksdb-manual-compaction-threads=#
Scope: Global, Session
Dynamic: Yes
rocksdb_manual_wal_flushDescription: DBOptions::manual_wal_flush for RocksDB.
Command line: --rocksdb-manual-wal-flush={0|1}
Scope: Global
Dynamic: No
rocksdb_master_skip_tx_apiDescription: Skip holding any lock on row access. Not effective on the slave.
Command line: --rocksdb-master-skip-tx-api={0|1}
Scope: Global, Session
Dynamic: Yes
rocksdb_max_background_compactionsDescription: DBOptions::max_background_compactions for RocksDB.
Command line: --rocksdb-max-background-compactions=#
Scope: Global
Dynamic: Yes
rocksdb_max_background_flushesDescription: DBOptions::max_background_flushes for RocksDB.
Command line: --rocksdb-max-background-flushes=#
Scope: Global
Dynamic: No
rocksdb_max_background_jobsDescription: DBOptions::max_background_jobs for RocksDB.
Command line: --rocksdb-max-background-jobs=#
Scope: Global
Dynamic: Yes
rocksdb_max_latest_deadlocksDescription: Maximum number of recent deadlocks to store.
Command line: --rocksdb-max-latest-deadlocks=#
Scope: Global
Dynamic: Yes
rocksdb_max_log_file_sizeDescription: DBOptions::max_log_file_size for RocksDB.
Command line: --rocksdb-max-log-file-size=#
Scope: Global
Dynamic: No
rocksdb_max_manifest_file_sizeDescription: DBOptions::max_manifest_file_size for RocksDB.
Command line: --rocksdb-max-manifest-file-size=#
Scope: Global
Dynamic: No
rocksdb_max_manual_compactionsDescription: Maximum number of pending plus ongoing manual compactions.
Command line: --rocksdb-max-manual-compactions=#
Scope: Global
Dynamic: Yes
rocksdb_max_open_filesDescription: DBOptions::max_open_files for RocksDB.
Command line: --rocksdb-max-open-files=#
Scope: Global
Dynamic: No
rocksdb_max_row_locksDescription: Maximum number of locks a transaction can have.
Command line: --rocksdb-max-row-locks=#
Scope: Global, Session
Dynamic: Yes
rocksdb_max_subcompactionsDescription: DBOptions::max_subcompactions for RocksDB.
Command line: --rocksdb-max-subcompactions=#
Scope: Global
Dynamic: No
rocksdb_max_total_wal_sizeDescription: DBOptions::max_total_wal_size for RocksDB. The maximum size limit for write-ahead-log files. Once this limit is reached, RocksDB forces the flushing of memtables.
Command line: --rocksdb-max-total-wal-size=#
Scope: Global
Dynamic: No
rocksdb_merge_buf_sizeDescription: Size to allocate for merge sort buffers written out to disk during inplace index creation.
Command line: --rocksdb-merge-buf-size=#
Scope: Global, Session
Dynamic: Yes
rocksdb_merge_combine_read_sizeDescription: Size that we have to work with during combine (reading from disk) phase of external sort during fast index creation.
Command line: --rocksdb-merge-combine-read-size=#
Scope: Global, Session
Dynamic: Yes
rocksdb_merge_tmp_file_removal_delay_msDescription: Fast index creation creates a large tmp file on disk during index creation. Removing this large file all at once when index creation is complete can cause trim stalls on Flash. This variable specifies a duration to sleep (in milliseconds) between calling chsize() to truncate the file in chunks. The chunk size is the same as merge_buf_size.
Command line: --rocksdb-merge-tmp-file-removal-delay-ms=#
Scope: Global, Session
rocksdb_new_table_reader_for_compaction_inputsDescription: DBOptions::new_table_reader_for_compaction_inputs for RocksDB.
Command line: --rocksdb-new-table-reader-for-compaction-inputs={0|1}
Scope: Global
Dynamic: No
rocksdb_no_block_cacheDescription: BlockBasedTableOptions::no_block_cache for RocksDB.
Command line: --rocksdb-no-block-cache={0|1}
Scope: Global
Dynamic: No
rocksdb_override_cf_optionsDescription: Per-column-family option overrides for RocksDB. Note that the rocksdb-override-cf-options syntax is quite strict: any typo results in a parse error, and the MyRocks plugin will not be loaded. Depending on your configuration, the server may still start. If it does start, you can check whether the plugin is loaded with: select * from information_schema.plugins where plugin_name='ROCKSDB' (note that you need the "ROCKSDB" plugin; other auxiliary plugins like "ROCKSDB_TRX" might still get loaded). Another way to detect the error is to check the error log.
Command line: --rocksdb-override-cf-options=value
Scope: Global
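The plugin check mentioned above, written out as a standalone statement:
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM INFORMATION_SCHEMA.PLUGINS
WHERE PLUGIN_NAME = 'ROCKSDB';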
rocksdb_paranoid_checksDescription: DBOptions::paranoid_checks for RocksDB.
Command line: --rocksdb-paranoid-checks={0|1}
Scope: Global
Dynamic: No
rocksdb_pause_background_workDescription: Disable all rocksdb background operations.
Command line: --rocksdb-pause-background-work={0|1}
Scope: Global
Dynamic: Yes
rocksdb_perf_context_levelDescription: Perf Context Level for rocksdb internal timer stat collection.
Command line: --rocksdb-perf-context-level=#
Scope: Global, Session
Dynamic: Yes
rocksdb_persistent_cache_pathDescription: Path for BlockBasedTableOptions::persistent_cache for RocksDB.
Command line: --rocksdb-persistent-cache-path=value
Scope: Global
Dynamic: No
rocksdb_persistent_cache_size_mbDescription: Size of cache in MB for BlockBasedTableOptions::persistent_cache for RocksDB.
Command line: --rocksdb-persistent-cache-size-mb=#
Scope: Global
Dynamic: No
rocksdb_pin_l0_filter_and_index_blocks_in_cacheDescription: pin_l0_filter_and_index_blocks_in_cache for RocksDB.
Command line: --rocksdb-pin-l0-filter-and-index-blocks-in-cache={0|1}
Scope: Global
Dynamic: No
rocksdb_print_snapshot_conflict_queriesDescription: Log queries that encountered snapshot conflict errors to the *.err log.
Command line: --rocksdb-print-snapshot-conflict-queries={0|1}
Scope: Global
Dynamic: Yes
rocksdb_rate_limiter_bytes_per_secDescription: DBOptions::rate_limiter bytes_per_sec for RocksDB.
Command line: --rocksdb-rate-limiter-bytes-per-sec=#
Scope: Global
Dynamic: Yes
rocksdb_read_free_rpl_tablesDescription: List of tables that will use read-free replication on the slave (i.e., rows are not looked up during replication).
Command line: --rocksdb-read-free-rpl-tables=value
Scope: Global, Session
Dynamic: Yes
rocksdb_records_in_rangeDescription: Used to override the result of records_in_range(). Set to a positive number to override.
Command line: --rocksdb-records-in-range=#
Scope: Global, Session
Dynamic: Yes
rocksdb_remove_mariabackup_checkpointDescription: Remove the mariadb-backup checkpoint.
Command line: --rocksdb-remove-mariabackup-checkpoint={0|1}
Scope: Global
Dynamic: Yes
rocksdb_reset_statsDescription: Reset the RocksDB internal statistics without restarting the DB.
Command line: --rocksdb-reset-stats={0|1}
Scope: Global
Dynamic: Yes
rocksdb_rollback_on_timeoutDescription: Whether to roll back the complete transaction or a single statement on lock wait timeout (a single statement by default).
Command line: --rocksdb-rollback-on-timeout={0|1}
Scope: Global
Dynamic: Yes
rocksdb_seconds_between_stat_computesDescription: Sets a number of seconds to wait between optimizer stats recomputation. Only changed indexes are refreshed.
Command line: --rocksdb-seconds-between-stat-computes=#
Scope: Global
Dynamic: Yes
rocksdb_signal_drop_index_threadDescription: Wake up drop index thread.
Command line: --rocksdb-signal-drop-index-thread={0|1}
Scope: Global
Dynamic: Yes
rocksdb_sim_cache_sizeDescription: Simulated cache size for RocksDB.
Command line: --rocksdb-sim-cache-size=#
Scope: Global
Dynamic: No
rocksdb_skip_bloom_filter_on_readDescription: Skip using bloom filter for reads.
Command line: --rocksdb-skip-bloom-filter-on-read={0|1}
Scope: Global, Session
Dynamic: Yes
rocksdb_skip_fill_cacheDescription: Skip filling block cache on read requests.
Command line: --rocksdb-skip-fill-cache={0|1}
Scope: Global, Session
Dynamic: Yes
rocksdb_skip_unique_check_tablesDescription: Skip unique constraint checking for the specified tables.
Command line: --rocksdb-skip-unique-check-tables=value
Scope: Global, Session
Dynamic: Yes
rocksdb_sst_mgr_rate_bytes_per_secDescription: DBOptions::sst_file_manager rate_bytes_per_sec for RocksDB
Command line: --rocksdb-sst-mgr-rate-bytes-per-sec=#
Scope: Global
Dynamic: Yes
rocksdb_stats_dump_period_secDescription: DBOptions::stats_dump_period_sec for RocksDB.
Command line: --rocksdb-stats-dump-period-sec=#
Scope: Global
Dynamic: No
rocksdb_stats_levelDescription: Statistics Level for RocksDB. Default is 0 (kExceptHistogramOrTimers).
Command line: --rocksdb-stats-level=#
Scope: Global
Dynamic: Yes
rocksdb_stats_recalc_rateDescription: The number of indexes per second to recalculate statistics for. 0 to disable background recalculation.
Command line: --rocksdb-stats-recalc-rate=#
Scope: Global
Dynamic: Yes
rocksdb_store_row_debug_checksumsDescription: Include checksums when writing index/table records.
Command line: --rocksdb-store-row-debug-checksums={0|1}
Scope: Global, Session
Dynamic: Yes
rocksdb_strict_collation_checkDescription: Enforce case-sensitive collations for MyRocks indexes.
Command line: --rocksdb-strict-collation-check={0|1}
Scope: Global
Dynamic: Yes
rocksdb_strict_collation_exceptionsDescription: List of tables (matched by regex) that are excluded from the case-sensitive collation enforcement.
Command line: --rocksdb-strict-collation-exceptions=value
Scope: Global
Dynamic: Yes
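A hypothetical example that exempts table t1 and any table whose name starts with log_ from the collation check:
SET GLOBAL rocksdb_strict_collation_exceptions = 't1,log_.*';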
rocksdb_supported_compression_typesDescription: Compression algorithms supported by RocksDB. Note that RocksDB does not make use of .
Command line: --rocksdb-supported-compression-types=value
Scope: Global
Dynamic: No
rocksdb_table_cache_numshardbitsDescription: DBOptions::table_cache_numshardbits for RocksDB.
Command line: --rocksdb-table-cache-numshardbits=#
Scope: Global
Dynamic: No
rocksdb_table_stats_sampling_pctDescription: Percentage of entries to sample when collecting statistics about table properties. Specify 0 to sample everything, or a percentage in the range [1..100]. By default, 10% of entries are sampled.
Command line: --rocksdb-table-stats-sampling-pct=#
Scope: Global
Dynamic: Yes
rocksdb_tmpdirDescription: Directory for temporary files during DDL operations.
Command line: --rocksdb-tmpdir[=value]
Scope: Global, Session
Dynamic: Yes
rocksdb_trace_sst_apiDescription: Generate trace output in the log for each call to the SstFileWriter.
Command line: --rocksdb-trace-sst-api={0|1}
Scope: Global, Session
Dynamic: Yes
rocksdb_two_write_queuesDescription: DBOptions::two_write_queues for RocksDB.
Command line: --rocksdb-two-write-queues={0|1}
Scope: Global
Dynamic: No
rocksdb_unsafe_for_binlogDescription: Allow statement-based binary logging, which may break consistency.
Command line: --rocksdb-unsafe-for-binlog={0|1}
Scope: Global, Session
Dynamic: Yes
rocksdb_update_cf_optionsDescription: Option updates per column family for RocksDB.
Command line: --rocksdb-update-cf-options=value
Scope: Global
Dynamic: Yes
rocksdb_use_adaptive_mutexDescription: DBOptions::use_adaptive_mutex for RocksDB.
Command line: --rocksdb-use-adaptive-mutex={0|1}
Scope: Global
Dynamic: No
rocksdb_use_clock_cacheDescription: Use ClockCache instead of default LRUCache for RocksDB.
Command line: --rocksdb-use-clock-cache={0|1}
Scope: Global
Dynamic: No
rocksdb_use_direct_io_for_flush_and_compactionDescription: DBOptions::use_direct_io_for_flush_and_compaction for RocksDB.
Command line: --rocksdb-use-direct-io-for-flush-and-compaction={0|1}
Scope: Global
Dynamic: No
rocksdb_use_direct_readsDescription: DBOptions::use_direct_reads for RocksDB.
Command line: --rocksdb-use-direct-reads={0|1}
Scope: Global
Dynamic: No
rocksdb_use_direct_writesDescription: DBOptions::use_direct_writes for RocksDB.
Command line: --rocksdb-use-direct-writes={0|1}
Scope: Global
Dynamic: No
rocksdb_use_fsyncDescription: DBOptions::use_fsync for RocksDB.
Command line: --rocksdb-use-fsync={0|1}
Scope: Global
Dynamic: No
rocksdb_validate_tablesDescription: Verify that all .frm files match the RocksDB tables (0 means no verification, 1 means verify and fail on error, and 2 means verify but continue).
Command line: --rocksdb-validate-tables=#
Scope: Global
Dynamic: No
rocksdb_verify_row_debug_checksumsDescription: Verify checksums when reading index/table records.
Command line: --rocksdb-verify-row-debug-checksums={0|1}
Scope: Global, Session
Dynamic: Yes
rocksdb_wal_bytes_per_syncDescription: DBOptions::wal_bytes_per_sync for RocksDB.
Command line: --rocksdb-wal-bytes-per-sync=#
Scope: Global
Dynamic: Yes
rocksdb_wal_dirDescription: DBOptions::wal_dir for RocksDB. Directory where the write-ahead-log files are stored.
Command line: --rocksdb-wal-dir=value
Scope: Global
Dynamic: No
rocksdb_wal_recovery_modeDescription: DBOptions::wal_recovery_mode for RocksDB. Default is kAbsoluteConsistency. Records that are not yet committed are stored in the Write-Ahead-Log (WAL). If the server is not cleanly shut down, the recovery mode determines the WAL recovery behavior.
1: kAbsoluteConsistency. Will not start if any corrupted entries (including incomplete writes) are detected (the default).
0: kTolerateCorruptedTailRecords. Ignores any errors found at the end of the WAL.
2: kPointInTimeRecovery. Recovers the WAL up to the last consistent point, discarding anything after the first corrupted record.
3: kSkipAnyCorruptedRecords. A risky option where any corrupted entries are skipped while subsequent healthy WAL entries are applied.
Command line: --rocksdb-wal-recovery-mode=#
Scope: Global
Dynamic: Yes
rocksdb_wal_size_limit_mbDescription: DBOptions::WAL_size_limit_MB for RocksDB. Write-ahead-log files are moved to an archive directory once their memtables are flushed. This variable specifies the largest size, in MB, that the archive can be.
Command line: --rocksdb-wal-size-limit-mb=#
Scope: Global
Dynamic: No
rocksdb_wal_ttl_secondsDescription: DBOptions::WAL_ttl_seconds for RocksDB. Oldest time, in seconds, that a write-ahead-log file should exist.
Command line: --rocksdb-wal-ttl-seconds=#
Scope: Global
Dynamic: No
rocksdb_whole_key_filteringDescription: BlockBasedTableOptions::whole_key_filtering for RocksDB. If set (the default), the bloom filter uses the whole key (rather than only a prefix) for filtering. Lookups should use the whole key for matching to make the best use of this setting.
Command line: --rocksdb-whole-key-filtering={0|1}
Scope: Global
Dynamic: No
rocksdb_write_batch_max_bytesDescription: Maximum size of write batch in bytes. 0 means no limit.
Command line: --rocksdb-write-batch-max-bytes=#
Scope: Global, Session
Dynamic: Yes
rocksdb_write_disable_walDescription: WriteOptions::disableWAL for RocksDB.
Command line: --rocksdb-write-disable-wal={0|1}
Scope: Global, Session
Dynamic: Yes
rocksdb_write_ignore_missing_column_familiesDescription: WriteOptions::ignore_missing_column_families for RocksDB.
Command line: --rocksdb-write-ignore-missing-column-families={0|1}
Scope: Global, Session
Dynamic: Yes
rocksdb_write_policyDescription: DBOptions::write_policy for RocksDB.
Command line: --rocksdb-write-policy=val
Scope: Global
Dynamic: No
This page is licensed: CC BY-SA / Gnu FDL
Data Type: boolean
Default Value: ON
Data Type: boolean
Default Value: OFF
Data Type: boolean
Default Value: OFF
Data Type: boolean
Default Value: OFF
Data Type: boolean
Default Value: OFF
Data Type: boolean
Default Value: OFF
Removed: ,
Data Type: numeric
Default Value: 1
Range: -1 to 64
Removed: ,
Data Type: boolean
Default Value: OFF
Data Type: numeric
Default Value: 536870912
Range: 1024 to 9223372036854775807
Data Type: numeric
Default Value: 16
Range: 1 to 2147483647
Data Type: numeric
Default Value: 4096
Range: 1 to 18446744073709551615
Data Type: numeric
Default Value: 10
Range: 0 to 2147483647
Data Type: boolean
Default Value: OFF
Data Type: boolean
Default Value: OFF
Data Type: boolean
Default Value: OFF
Data Type: numeric
Default Value: 1000
Range: 1 to 1073741824
Data Type: numeric
Default Value: 0
Range: 0 to 18446744073709551615
Data Type: boolean
Default Value: ON
Data Type: double
Default Value: 0.000000
Range: 0 to 1
Data Type: boolean
Default Value: ON
Data Type: boolean
Default Value: ON
Data Type: numeric
Default Value: 100
Range: 0 to 100
Data Type: boolean
Default Value: ON
Data Type: boolean
Default Value: OFF
Data Type: boolean
Default Value: OFF
Data Type: string
Default Value: (Empty)
Data Type: numeric
Default Value: 0
Range: 0 to 18446744073709551615
Data Type: numeric
Default Value: 0
Range: 0 to 2000000
Data Type: boolean
Default Value: OFF
Data Type: numeric
Default Value: 0
Range: -1 to 9223372036854775807
Data Type: numeric
Default Value: 0
Range: 0 to 2000000
Data Type: boolean
Default Value: 1
Removed: ,
Data Type: string
Default Value: (Empty)
Data Type: boolean
Default Value: ON
Data Type: boolean
Default Value: OFF
Data Type: string
Default Value: ./#rocksdb
Data Type: numeric
Default Value: 0
Range: 0 to 18446744073709551615
Data Type: boolean
Default Value: OFF
Data Type: numeric
Default Value: 50
Range: 2 to 18446744073709551615
Data Type: numeric
Default Value: 0
Range: 0 to 4294967295
Data Type: boolean
Default Value: ON
Data Type: boolean
Default Value: OFF
Data Type: numeric
Default Value: 0
Range: -3600 to 3600
Data Type: numeric
Default Value: 0
Range: -3600 to 3600
Data Type: numeric
Default Value: 0
Range: -3600 to 3600
Introduced: ,
Data Type: string
Default Value: (Empty)
Data Type: numeric
Default Value: 0 (Previously 16777216)
Range: 0 to 18446744073709551615
Data Type: string
Default Value: (Empty string)
Introduced: , ,
Data Type: numeric
Default Value: 21600000000
Range: 0 to 9223372036854775807
Data Type: boolean
Default Value: ON
Data Type: boolean
Default Value: ON
Data Type: boolean
Default Value: ON
Introduced: , ,
Data Type: boolean
Default Value: OFF
Data Type: boolean
Default Value: ON
Introduced: ,
Data Type: boolean
Default Value: ON
Introduced: ,
Data Type: boolean
Default Value: OFF
Data Type: boolean
Default Value: OFF
Data Type: boolean
Default Value: ON
Introduced: ,
Data Type: numeric
Default Value: 1
Range: 0 to 2
Data Type: boolean
Default Value: ON
Removed: ,
Data Type: boolean
Default Value: ON
Data Type: numeric
Default Value: 60000000
Range: 0 to 2147483647
Data Type: boolean
Default Value: OFF
Introduced: ,
Data Type: boolean
Default Value: OFF
Data Type: numeric
Default Value: 0
Range: 0 to 2147483647
Data Type: string
Default Value: As per git revision.
Data Type: boolean
Default Value: ON
Data Type: boolean
Default Value: ON
Introduced: ,
Data Type: enum
Default Value: kBinarySearch
Valid Values: kBinarySearch, kHashSearch
Data Type: enum
Default Value: error_level
Valid Values: error_level, debug_level, info_level, warn_level, fatal_level
Data Type: numeric
Default Value: 0
Valid Values: 0 to 4294967295
Introduced: ,
Data Type: boolean
Default Value: ON
Data Type: numeric
Default Value: 1000
Range: 0 to 18446744073709551615
Data Type: boolean
Default Value: OFF
Data Type: boolean
Default Value: OFF
Data Type: numeric
Default Value: 1
Range: 1 to 1073741824
Data Type: string
Default Value: (Empty)
Introduced:
Data Type: numeric
Default Value: 0
Range: 0 to 18446744073709551615
Data Type: numeric
Default Value: 4194304
Range: 0 to 18446744073709551615
Data Type: numeric
Default Value: 0
Range: 0 to 128
Introduced: ,
Data Type: boolean
Default Value: ON
Data Type: boolean
Default Value: OFF
Data Type: numeric
Default Value: 1
Range: 1 to 64
Removed: ,
Data Type: numeric
Default Value: 1
Range: 1 to 64
Removed: ,
Data Type: numeric
Default Value: 2
Range: -1 to 64
Introduced: ,
Data Type: numeric
Default Value: 5
Range: 0 to 4294967295
Data Type: numeric
Default Value: 0
Range: 0 to 18446744073709551615
Data Type: numeric
Default Value: 1073741824
Range: 0 to 18446744073709551615
Data Type: numeric
Default Value: 10
Range: 0 to 4294967295
Introduced: ,
Data Type: numeric
Default Value: -2
Range: -2 to 2147483647
Data Type: numeric
Default Value: 1048576
Range:
1 to 1073741824 (>= , )
1 to 1048576 (<= , )
Data Type: numeric
Default Value: 1
Range: 1 to 64
Data Type: numeric
Default Value: 0
Range: 0 to 9223372036854775807
Data Type: numeric
Default Value: 67108864
Range: 100 to 18446744073709551615
Data Type: numeric
Default Value: 1073741824
Range: 100 to 18446744073709551615
Data Type: numeric
Default Value: 0
Range: 0 to 18446744073709551615
Data Type: boolean
Default Value: OFF
Data Type: boolean
Default Value: OFF
Dynamic: No
Data Type: string
Default Value: (Empty)
Data Type: boolean
Default Value: ON
Data Type: boolean
Default Value: OFF
Data Type: numeric
Default Value: 0
Range: 0 to 5
Data Type: string
Default Value: (Empty)
Data Type: numeric
Default Value: 0
Range: 0 to 18446744073709551615
Data Type: boolean
Default Value: ON
Data Type: boolean
Default Value: OFF
Data Type: numeric
Default Value: 0
Range: 0 to 9223372036854775807
Data Type: string
Default Value: (Empty)
Removed: , ,
Data Type: numeric
Default Value: 0
Range: 0 to 2147483647
Data Type: boolean
Default Value: OFF
Introduced: ,
Data Type: boolean
Default Value: OFF
Introduced: ,
Data Type: boolean
Default Value: OFF
Introduced: , ,
Data Type: numeric
Default Value: 3600
Range: 0 to 4294967295
Data Type: boolean
Default Value: OFF
Data Type: numeric
Default Value: 0
Range: 0 to 9223372036854775807
Data Type: boolean
Default Value: OFF
Data Type: boolean
Default Value: OFF
Data Type: string
Default Value: .*
Data Type: numeric
Default Value: 0
Range: 0 to 18446744073709551615
Data Type: numeric
Default Value: 600
Range: 0 to 2147483647
Data Type: numeric
Default Value: 0
Range: 0 to 4
Introduced: , ,
Data Type: numeric
Default Value: 0
Range: 0 to 4294967295
Introduced:
Data Type: boolean
Default Value: OFF
Data Type: boolean
Default Value: ON
Data Type: string
Default Value: (Empty)
Data Type: string
Default Value: Snappy,Zlib,ZSTDNotFinal
Data Type: numeric
Default Value: 6
Range: 0 to 19
Data Type: numeric
Default Value: 10
Range: 0 to 100
Data Type: string
Default Value: (Empty)
Data Type: boolean
Default Value: OFF
Data Type: boolean
Default Value: ON
Introduced: ,
Data Type: boolean
Default Value: OFF
Data Type: varchar
Default Value: (Empty)
Introduced: ,
Data Type: boolean
Default Value: OFF
Data Type: boolean
Default Value: OFF
Data Type: boolean
Default Value: OFF
Introduced: ,
Data Type: boolean
Default Value: OFF
Data Type: boolean
Default Value: OFF
Removed: ,
Data Type: boolean
Default Value: OFF
Data Type: numeric
Default Value: 1
Range: 0 to 2
Data Type: boolean
Default Value: OFF
Data Type: numeric
Default Value: 0
Range: 0 to 18446744073709551615
Data Type: string
Default Value: (Empty)
Data Type: numeric
Default Value: 1
Range: 0 to 3
Data Type: numeric
Default Value: 0
Range: 0 to 9223372036854775807
Data Type: numeric
Default Value: 0
Range: 0 to 9223372036854775807
Data Type: boolean
Default Value: ON
Data Type: numeric
Default Value: 0
Range: 0 to 18446744073709551615
Data Type: boolean
Default Value: OFF
Data Type: boolean
Default Value: OFF
Data Type: enum
Default Value: write_committed
Valid Values: write_committed, write_prepared, write_unprepared
Introduced: ,