Explore legacy storage engines in MariaDB Server. This section provides information on older engines, their historical context, and considerations for migration or compatibility.
Legacy FEDERATED storage engine description.
The FEDERATED Storage Engine is a legacy storage engine that is no longer supported. A fork, FederatedX, is actively maintained. The CONNECT storage engine also permits accessing a remote database via a MySQL or ODBC connection (table types: MYSQL, ODBC).
The FEDERATED Storage Engine was originally designed to let one access data remotely without using clustering or replication, and perform local queries that automatically access the remote data.
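As an illustration, a hedged sketch of such a table definition (the server address, credentials, and the shop/orders schema are hypothetical; FederatedX uses the same syntax):

```sql
-- Sketch only: a FEDERATED/FederatedX table pointing at a table on a remote server.
-- The local definition must match the remote table's structure.
CREATE TABLE remote_orders (
  id INT NOT NULL,
  amount DECIMAL(10,2),
  PRIMARY KEY (id)
) ENGINE=FEDERATED
  CONNECTION='mysql://user:pass@remote-host:3306/shop/orders';

-- Local queries now transparently access the remote data:
SELECT COUNT(*) FROM remote_orders;
```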
This page is licensed: CC BY-SA / Gnu FDL
Legacy Cassandra storage engine description. Cassandra was removed from MariaDB in MariaDB 10.6.
A storage engine interface to Cassandra. Read the original announcement information about CassandraSE.
The TokuDB storage engine has been removed from MariaDB.
The TokuDB storage engine is for use in high-performance and write-intensive environments, offering increased compression and better performance.
It is available in an open-source version, included with 64-bit MariaDB (but not enabled by default), and an Enterprise edition available from Tokutek.
Note that the default value of tokudb_pk_insert_mode will prevent row-based replication from working. To use RBR, change the value of this variable.
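For example (a sketch; mode 2 is the setting documented below as compatible with triggers and row-based replication):

```sql
-- Allow row-based replication at the cost of slower primary-key inserts.
SET GLOBAL tokudb_pk_insert_mode=2;
```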
The version of TokuDB in your local MariaDB can be found by querying the tokudb_version system variable:
In the MariaDB binary tarballs, only the ones labeled "glibc_214" have TokuDB.
With this version, TokuDB now follows the version numbering of Percona XtraDB.
```
SHOW VARIABLES LIKE 'tokudb_version';
```
Legacy Cassandra storage engine description. Cassandra was removed from MariaDB in MariaDB 10.6.
This page describes how to build the Cassandra Storage Engine.
The code is in bazaar branch at lp:~maria-captains/maria/5.5-cassandra.
Alternatively, you can download a tarball from
The build process is not fully streamlined yet. It is
known to work on Fedora 15 and OpenSUSE
known not to work on Ubuntu Oneiric Ocelot
known to work on Ubuntu Precise Pangolin
The build process is as follows:
Install Cassandra (we tried 1.1.3 through 1.1.5; 1.2 beta versions should work but haven't been tested)
Install the Thrift library (we used 0.8.0); only the C++ backend is needed.
We installed it by compiling the source tarball downloaded from dist.apache.org (see the wget command below).
Edit storage/cassandra/CMakeLists.txt so that INCLUDE_DIRECTORIES points to your Thrift installation.
The Cassandra storage engine is linked into the server (i.e., it is not a plugin). All you need to do is make sure Thrift's libthrift.so can be found by the loader. This may require adjusting the LD_LIBRARY_PATH environment variable.
There is a basic test suite. In order to run it, one needs to:
Start Cassandra on localhost
Set PATH so that cqlsh and cassandra-cli binaries can be found
From the build directory, run
This page is licensed: CC BY-SA / Gnu FDL
Legacy Cassandra storage engine description. Cassandra was removed from MariaDB in MariaDB 10.6.
These are instructions on how exactly we build Cassandra SE packages.
See the How_to_access_buildbot_VMs page on the internal wiki. The build VM to use is precise-amd64-build, started with the ezvm command shown below.
Get into the VM and continue to next section.
Create another SSH connection to terrier and run the script suggested by the motd.
Press (C-a C-c) to create another window
Copy the base bazaar repository into the VM:
Then, get back to the window with VM, and run in VM:
This should end with:
Free up some disk space:
Verify that mysqld was built with Cassandra SE:
This should point to libthrift-0.8.0.so.
In the second window (the one that's on terrier, but not in VM), run:
This page is licensed: CC BY-SA / Gnu FDL
Legacy Cassandra storage engine description. Cassandra was removed from MariaDB in MariaDB 10.6.
This page documents status variables related to the Cassandra storage engine. See Server Status Variables for a complete list of status variables that can be viewed with SHOW STATUS.
Cassandra_multiget_keys_scanned
Description: Number of keys we've made lookups for.
Scope: Global, Session
Data Type: numeric

Cassandra_multiget_reads
Description: Number of read operations.
Scope: Global, Session
Data Type: numeric

Cassandra_multiget_rows_read
Description: Number of rows actually read.
Scope: Global, Session
Data Type: numeric

Cassandra_network_exceptions
Description: Number of network exceptions.
Scope: Global, Session
Data Type: numeric
Introduced:

Cassandra_row_insert_batches
Description: Number of insert batches performed.
Scope: Global, Session
Data Type: numeric

Cassandra_row_inserts
Description: Number of rows inserted.
Scope: Global, Session
Data Type: numeric

Cassandra_timeout_exceptions
Description: Number of Timeout exceptions we got from Cassandra.
Scope: Global, Session
Data Type: numeric

Cassandra_unavailable_exceptions
Description: Number of Unavailable exceptions we got from Cassandra.
Scope: Global, Session
Data Type: numeric
This page is licensed: CC BY-SA / Gnu FDL
Legacy Cassandra storage engine description. Cassandra was removed from MariaDB in MariaDB 10.6.
This page lists difficulties and peculiarities of the Cassandra Storage Engine. I'm not putting them into the bug tracker because it is not clear whether these properties should be considered bugs.
There seems to be no way to get even a rough estimate of how many different keys are present in a column family. I'm using an arbitrary value of 1000 now, which causes:
EXPLAIN will always show rows=1000 for full table scans. In the future, this may cause poor query plans.
DELETE FROM table always prints "1000 rows affected", with no regards how many records were actually there in the table.
We could use the new feature to get some data statistics.
This page is licensed: CC BY-SA / Gnu FDL
Legacy Cassandra storage engine description. Cassandra was removed from MariaDB in MariaDB 10.6.
Julien Duponchelle has made a virtual machine available for testing the Cassandra storage engine. Find out more at Cassandra MariaDB VirtualBox.
The virtual machine is based on:
Ubuntu 12.04
DataStax Cassandra
MariaDB with the Cassandra storage engine
Full setup instructions are at Julien's website.
This page is licensed: CC BY-SA / Gnu FDL
Legacy Cassandra storage engine description. Cassandra was removed from MariaDB in MariaDB 10.6.
These are possible future directions for Cassandra Storage Engine. This is mostly brainstorming, nobody has committed to implementing this.
Unlike MySQL/MariaDB, Cassandra is not suitable for transaction processing (the only limited scenario it handles well is append-only databases).
Instead, it focuses on the ability to deliver real-time analytics. This is achieved via the following combination:
Insert/update operations are reduced to inserting a newer version, and the implementation (SSTree) allows lots of updates to be made at low cost.
The data model is targeted at denormalized data; Cassandra docs and user stories all mention the practice of creating/populating a dedicated column family (= table) for each type of query you're going to run.
In other words, Cassandra encourages creation/use of materialized VIEWs. Having lots of materialized VIEWs makes updates expensive, but their low cost and non-conflicting nature should offset that.
How does one use Cassandra together with an SQL database? I can think of these use cases:
The "OLTP as in SQL" is kept in the SQL database.
Data that's too big for SQL database (such as web views, or clicks) is stored in Cassandra, but can also be accessed from SQL.
As an example, one can think of a web shop which provides real-time peeks into analytics data, like Amazon's "people who looked at this item, also looked at ..." or "today's best sellers in this category are ...", etc.
Generally, CQL (Cassandra Query Language) allows querying Cassandra's data in an SQL-like fashion.
Access through a storage engine will additionally allow:
to get all of the data from one point, instead of two
joins between SQL and Cassandra's data might be more efficient due to Batched Key Access (this remains to be seen)
??
Suppose, all of the system's data is actually stored in an OLTP SQL database.
Cassandra is only used as an accelerator for analytical queries. Cassandra won't allow arbitrary, ad-hoc dss-type queries, it will require the DBA to create and maintain appropriate column families (however, it is supposed to give a nearly-instant answers to analytics-type questions).
Tasks that currently have no apparent solutions:
There is no way to replicate data from MySQL/MariaDB into Cassandra. It would be nice if one could update data in MySQL and that would cause appropriate inserts made into all relevant column families in Cassandra.
...
This page is licensed: CC BY-SA / Gnu FDL
Legacy Cassandra storage engine description. Cassandra was removed from MariaDB in MariaDB 10.6.
Joins with data stored in a Cassandra database are only possible on the MariaDB side. That is, if we want to compute a join between two tables, we will:
Read the relevant data for the first table.
Based on data we got in #1, read the matching records from the second table.
Either of the tables can be an InnoDB table or a Cassandra table. In case the second table is a Cassandra table, the Cassandra Storage Engine can read the matching records in an efficient way.
All this is targeted at running joins which touch a small fraction of the tables. The expected typical use-case looks like this:
The primary data is stored in MariaDB (i.e. in InnoDB)
There is also some extra data stored in Cassandra (e.g. hit counters)
The user accesses data in MariaDB (think of a website and a query like:
Cassandra SE allows grabbing some Cassandra data as well. One can write things like this:
which is much easier than using the Thrift API.
If the user wants to run huge joins that touch a big fraction of table's data, for example:
"What are top 10 countries that my website had visitors from in the last month"?
or
"Go through last month's orders and give me top 10 selling items"
then Cassandra Storage engine is not a good answer. Queries like this are answered in two ways:
Design their schema in Cassandra in such a way that allows getting this data in one small select. No kidding. This is what Cassandra is targeted at; they explicitly recommend that Cassandra schema design starts with the queries.
If the query doesn't match Cassandra's schema, they need to run Hive (or Pig), which have some kind of distributed join support. Hive/Pig compile queries to Map/Reduce jobs which are run across the whole cluster, so they will certainly beat the Cassandra Storage Engine, which runs on one mysqld node (you can have multiple mysqld nodes, of course, but they will not cooperate with one another).
It is possible to run Hive/Pig on Cassandra.
This page is licensed: CC BY-SA / Gnu FDL
```
ezvm precise-amd64-build
```
Edit storage/cassandra/CMakeLists.txt so that INCLUDE_DIRECTORIES points to the Thrift installation, then set:
```
export LIBS="-lthrift"    # on another machine it was "-lthrift -ldl"
export LDFLAGS=-L/path/to/thrift/libs
```
Build the server:
we used the BUILD/compile-pentium-max script (the name is for historic reasons; it will actually build an optimized amd64 binary)
There were two TokuDB specific mailing lists run by Tokutek:
tokudb-user - a mailing list for users of TokuDB
tokudb-dev - a mailing list for TokuDB contributors
These were discontinued and redirected to Percona's MySQL and MariaDB forum.
Tokutek has a #tokutek IRC channel on Freenode.
The following link will take you to the #tokutek channel using Freenode's web IRC client:
The Where to find other MariaDB users and developers page has many great MariaDB resources listed.
This page is licensed: CC BY-SA / Gnu FDL
```
MariaDB [j1]> delete from t1;
Query OK, 1000 rows affected (0.14 sec)
```
```
SELECT * FROM user_accounts WHERE username='joe'
```
Legacy Cassandra storage engine description. Cassandra was removed from MariaDB in MariaDB 10.6.
This page documents system variables related to the Cassandra storage engine. See Server System Variables for a complete list of system variables and instructions on setting them.
cassandra_default_thrift_host
Description: Host to connect to, if not specified on a per-table basis.
Scope: Global
Dynamic: Yes
Data Type: string

cassandra_failure_retries
Description: Number of times to retry on timeout/unavailable failures.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Value: 3
Valid Values: 1 to 1073741824

cassandra_insert_batch_size
Description: INSERT batch size.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Value: 100
Valid Values: 1 to 1073741824

cassandra_multiget_batch_size
Description: Batched Key Access batch size.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Value: 100
Valid Values: 1 to 1073741824

cassandra_read_consistency
Description: Consistency level to use for reading.
Scope: Global, Session
Default Value: ONE
Valid Values: ONE, TWO, THREE, ANY, ALL, QUORUM, EACH_QUORUM, LOCAL_QUORUM

cassandra_rnd_batch_size
Description: Full table scan batch size.
Scope: Global, Session
Default Value: 10000
Valid Values: 1 to 1073741824

cassandra_write_consistency
Description: Consistency level to use for writing.
Scope: Global, Session
Default Value: ONE
Valid Values: ONE, TWO, THREE, ANY, ALL, QUORUM, EACH_QUORUM, LOCAL_QUORUM
This page is licensed: CC BY-SA / Gnu FDL
The TokuDB storage engine has been removed from MariaDB.
Because we, the MariaDB developers, don't want to add a lot of new features or big code changes to a stable release, not all TokuDB features are merged at once into MariaDB. Instead they are added in stages.
On this page we list all the known differences between the TokuDB from Tokutek and the version included in MariaDB:
TokuDB is not the default storage engine.
```
cd mysql-test
./mysql-test-run t/cassandra.test
```

```
mkdir build
cd build
wget https://dist.apache.org/repos/dist/release/thrift/0.8.0/thrift-0.8.0.tar.gz
sudo apt-get install bzr
sudo apt-get install flex
tar zxvf thrift-0.8.0.tar.gz
cd thrift-0.8.0/
./configure --prefix=/home/buildbot/build/thrift-inst --without-qt4 --without-c_glib --without-csharp --without-java --without-erlang --without-python --without-perl --without-php --without-php_extension --without-ruby --without-haskell --without-go --without-d
make
make install
# free some space
make clean
cd ..
```

```
scp /home/psergey/5.5-cassandra-base.tgz runvm:
```

```
tar zxvf ../5.5-cassandra-base.tgz
rm -rf ../5.5-cassandra-base.tgz
cd 5.5-cassandra/
bzr pull lp:~maria-captains/maria/5.5-cassandra
```

```
export LIBS="-lthrift"
export LDFLAGS=-L/home/buildbot/build/thrift-inst/lib
mkdir mkdist
cd mkdist
cmake ..
make dist
```

```
basename mariadb-*.tar.gz .tar.gz > ../distdirname.txt
cp mariadb-5.5.25.tar.gz ../
cd ..
tar zxf "mariadb-5.5.25.tar.gz"
mv "mariadb-5.5.25" build
cd build
mkdir mkbin
cd mkbin
cmake -DBUILD_CONFIG=mysql_release ..
make -j4 package
```

```
CPack: - package: /home/buildbot/build/5.5-cassandra/build/mkbin/mariadb-5.5.25-linux-x86_64.tar.gz generated.
```

```
rm -fr ../../mkdist/
mv mariadb-5.5.25-linux-x86_64.tar.gz ../..
cd ../..
rm -rf build
```

```
mkdir fix-package
cd fix-package
tar zxvf ../mariadb-5.5.25-linux-x86_64.tar.gz
ldd mariadb-5.5.25-linux-x86_64/bin/mysqld
cp /home/buildbot/build/thrift-inst/lib/libthrift* mariadb-5.5.25-linux-x86_64/lib/
tar czf mariadb-5.5.25-linux-x86_64.tar.gz mariadb-5.5.25-linux-x86_64/
cp mariadb-5.5.25-linux-x86_64.tar.gz ..
```

```
mkdir build-cassandra
cd build-cassandra
scp runvm:/home/buildbot/build/5.5-cassandra/mariadb-5.5.25.tar.gz .
scp runvm:/home/buildbot/build/5.5-cassandra/mariadb-5.5.25-linux-x86_64.tar.gz .
```

```
SELECT
  user_accounts.*,
  cassandra_table.some_more_fields
FROM
  user_accounts, cassandra_table
WHERE
  user_accounts.username='joe' AND
  user_accounts.user_id= cassandra_table.user_id
```
TokuDB has been deprecated by its upstream maintainer. It is disabled from MariaDB 10.5 and has been removed in MariaDB 10.6 - MDEV-19780. We recommend MyRocks as a long-term migration path.
If you want to enable this, you have to start mysqld with: --default-storage-engine=tokudb.
Auto increment for second part of a key behaves as documented (and as it does in MyISAM and other storage engines).
The DDL syntax is different. While binaries from Tokutek have the patched SQL parser, TokuDB in MariaDB uses the special Storage Engine API extension. Thus in Tokutek binaries you write CLUSTERED KEY (columns) and, for example, ROW_FORMAT=TOKUDB_LZMA, while in MariaDB you write KEY (columns) CLUSTERING=YES and COMPRESSION=TOKUDB_LZMA (see the sketch after this list).
No online ALTER TABLE.
All ALTER TABLE statements that change data or indexes require a table copy.
No online OPTIMIZE TABLE.
No INSERT NOAR or UPDATE NOAR commands.
No gdb stack trace on sigsegv
IMPORTANT: the compression type does not default to the session variable as it does with Tokutek's builds. If COMPRESSION= is not included in CREATE TABLE or ALTER TABLE ENGINE=TokuDB, then the TokuDB table is uncompressed (before 5.5.37) or zlib-compressed (5.5.37 and later).
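A minimal sketch of the two DDL dialects described above (table and column names are hypothetical):

```sql
-- Tokutek binaries (patched SQL parser):
CREATE TABLE t1 (a INT, b INT, CLUSTERED KEY (b)) ENGINE=TokuDB ROW_FORMAT=TOKUDB_LZMA;

-- MariaDB equivalent, via the Storage Engine API extension:
CREATE TABLE t1 (a INT, b INT, KEY (b) CLUSTERING=YES) ENGINE=TokuDB COMPRESSION=TOKUDB_LZMA;
```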
MariaDB (starting from 10.0.5) has online ALTER TABLE. So the features missing are:
No INSERT NOAR or UPDATE NOAR commands.
We are working with Tokutek to improve this feature before adding it to MariaDB.
No online OPTIMIZE TABLE before (r4199)
No gdb stack trace on sigsegv
Before 10.0.10 the compression type did not default to the session variable. If COMPRESSION= was not included in CREATE TABLE or ALTER TABLE ENGINE=TokuDB then the TokuDB table was created uncompressed.
This is found on the TokuDB page.
This page is licensed: CC BY-SA / Gnu FDL
TokuDB has been deprecated by its upstream maintainer. It is disabled from MariaDB 10.5 and has been removed in MariaDB 10.6 - MDEV-19780. We recommend MyRocks as a long-term migration path.
The TokuDB storage engine has been removed from MariaDB.
TokuDB has been deprecated by its upstream maintainer. It is disabled from MariaDB 10.5 and has been removed in MariaDB 10.6 - MDEV-19780. We recommend MyRocks as a long-term migration path.
Note that ha_tokudb is not included in binaries built with the "old" glibc. Binaries built with glibc 2.14+ do include it.
The following sections detail how to install and enable TokuDB.
Until MariaDB versions 5.5.39 and 10.0.13, before upgrading TokuDB, the server needed to be cleanly shut down. If the server was not cleanly shut down, TokuDB would fail to start. Since 5.5.40 and 10.0.14, this has no longer been necessary. See MDEV-6173.
TokuDB is included with recent MariaDB 5.5 and 10.0 releases and does not require separate installation. Proceed straight to Check for Transparent HugePage Support on Linux. For older versions, see the distro-specific instructions below.
On Fedora, Red Hat, and CentOS, TokuDB is in a separate RPM package called MariaDB-tokudb-engine and is installed as follows:
On Ubuntu, TokuDB is available on the 64-bit versions of Ubuntu 12.10 and newer. On Debian, TokuDB is available on the 64-bit versions of Debian 7 "Wheezy" and newer.
The package is installed as follows:
In some earlier versions, TokuDB is in a separate package called mariadb-tokudb-engine-x.x, where x.x is the MariaDB series (5.5 or 10.0). The package is installed, for example on 5.5, as follows:
TokuDB requires the libjemalloc library (currently version 3.3.0 or greater).
libjemalloc should automatically be installed when using a package manager, and is loaded by restarting MariaDB.
It can be enabled, if not already done, by adding the following to the my.cnf configuration file:
If you don't do the above, you will get an error similar to the following in the error log:
Transparent HugePages are a feature of newer Linux kernel versions that causes problems for the memory usage tracking calculations in TokuKV and can lead to memory overcommit. If you have this feature enabled, TokuKV will not start, and you should turn it off.
You can check the status of Transparent Hugepages as follows:
If the path does not exist, Transparent Hugepages are not enabled and you may continue.
Alternatively, the following may be returned:
indicating Transparent HugePages are not enabled, and you may continue. If the following is returned:
Transparent Hugepages are enabled, and you will need to disable them.
To disable them, pass "transparent_hugepage=never" to the kernel in your bootloader (grub, lilo, etc.). For example, for SUSE, add transparent_hugepage=never to Optional Kernel Command Line Parameter at the end, such as after "showopts", and press OK. The setting will take effect on the next reboot.
You can also disable with:
On Centos or RedHat you can do:
Add line GRUB_CMDLINE_LINUX_DEFAULT="transparent_hugepage=never" to file /etc/default/grub
Update grub (boot loader):
For more information, see
Attempting to enable TokuDB while Linux Transparent HugePages are enabled will fail with an error such as:
See the section above on disabling Transparent HugePages.
The binary log should be enabled before attempting to enable TokuDB. Strictly speaking, the XA code requires two XA-capable storage engines, and this is checked at startup. In practice, this requires InnoDB and the binary log to be active. If they are not, the following warning is returned and XA features are disabled:
MariaDB's default my.cnf files come with a section for
TokuDB. To enable TokuDB just remove the '#' comment markers from the options
in the TokuDB section.
A typical TokuDB section looks like the following:
By default, the plugin-load option is commented out. Simply un-comment it
as in the example above.
Don't forget to also enable jemalloc in the config file.
With these changes done, you can restart MariaDB to activate TokuDB.
On RPM-based systems, instead of putting the TokuDB section in the main my.cnf file, it is placed in a separate file located at /etc/my.cnf.d/tokudb.cnf.
On Debian and Ubuntu, instead of putting the TokuDB section in the main my.cnf file, it is placed in a separate file located at /etc/mysql/conf.d/tokudb.cnf.
Generally, it is recommended to use one of the above methods to enable the
TokuDB storage engine, but it is also possible to enable it manually as with
other plugins. To do so, launch the mysql command-line client and connect to
MariaDB as a user with the SUPER privilege and execute the following
command:
TokuDB remains installed until someone executes UNINSTALL SONAME 'ha_tokudb'.
If you just want to test TokuDB, you can start the mysqld server with TokuDB with the following command:
This page is licensed: CC BY-SA / Gnu FDL
```
sudo yum install MariaDB-tokudb-engine
```

```
sudo apt-get install mariadb-plugin-tokudb
```

```
sudo apt-get install mariadb-tokudb-engine-5.5
```

```
[mysqld_safe]
malloc-lib= /path/to/jemalloc
```

```
2018-11-19 18:46:26 0 [ERROR] mysqld: Can't open shared library '/home/my/maria-10.3/mysql-test/var/plugins/ha_tokudb.so' (errno: 2, /usr/lib64/libjemalloc.so.2: cannot allocate memory in static TLS block)
```

```
cat /sys/kernel/mm/transparent_hugepage/enabled
```

```
always madvise [never]
```

```
[always] madvise never
```

```
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
```

```
grub2-mkconfig -o /boot/grub2/grub.cfg "$@"
```

```
ERROR 1123 (HY000): Can't initialize function 'TokuDB'; Plugin initialization function failed
```

```
Cannot enable tc-log at run-time. XA features of TokuDB are disabled
```

```
# See https://mariadb.com/kb/en/how-to-enable-tokudb-in-mariadb/
# for instructions how to enable TokuDB
#
# See https://mariadb.com/kb/en/tokudb-differences/ for differences
# between TokuDB in MariaDB and TokuDB from http://www.tokutek.com/
plugin-load=ha_tokudb

[mysqld_safe]
malloc-lib= /path/to/jemalloc
```

```
INSTALL SONAME 'ha_tokudb';
```

```
mysqld --plugin-load=ha_tokudb --plugin-dir=/usr/local/mysql/lib/mysql/plugin
```

Legacy Cassandra storage engine description. Cassandra was removed from MariaDB in MariaDB 10.6.
If using the YUM repositories on Fedora, Red Hat, or CentOS, first install the Cassandra storage engine package with:
If using the Debian or Ubuntu repositories, the Cassandra plugin is in the main MariaDB server package.
To install/activate the storage engine into MariaDB, issue the following command:
You can also activate the storage engine by using the --plugin-load option on server startup.
The Cassandra Storage Engine allows access to data in a Cassandra cluster from MariaDB. The overall architecture is shown in the picture below and is similar to that of the NDB cluster storage engine.
You can access the same Cassandra cluster from multiple MariaDB instances, provided each of them runs the Cassandra Storage Engine:
The primary goal of Cassandra SE (Storage Engine) is data integration between the SQL and NoSQL worlds. Have you ever needed to:
grab some of Cassandra's data from your web frontend, or SQL query?
insert a few records into Cassandra from some part of your app?
Now, this is easily possible. Cassandra SE makes Cassandra's column family appear as a table in MariaDB that you can insert into, update, and select from. You can write joins against this table; it is possible to join data that's stored in MariaDB with data that's stored in Cassandra.
The Cassandra Query Language (CQL) is the best way to work with Cassandra. It resembles SQL at first glance; however, the resemblance is very shallow. CQL queries are tightly bound to the way Cassandra accesses its data internally. For example, you can't have even the smallest join. In fact, adding a mere ... AND non_indexed_column=1 into a WHERE clause is already invalid CQL.
Our goal is to let one work in SQL instead of having to move between CQL and SQL all the time.
No. Cassandra SE is not suitable for running analytics-type queries that sift through huge amounts of data in a Cassandra cluster. That task is better handled by Hadoop-based tools like Apache Pig or Apache Hive. Cassandra SE is rather a "window" from an SQL environment into NoSQL.
Let's get specific. In order to access Cassandra's data from MariaDB, one needs to create a table with engine=cassandra. The table will represent a view of a Column Family in Cassandra and its definition will look like so:
The name of the table can be arbitrary. However, primary key, column names, and types must "match" those of Cassandra.
The table must define a column that corresponds to the Column Family's rowkey.
If Cassandra's rowkey has an alias (or name), then MariaDB's column must
have the same name.
Otherwise, it must be named "rowkey".
The type of MariaDB's column must match the validation_class of Cassandra's rowkey (datatype matching is covered in more detail below).
Note: Multi-column primary keys are currently not supported. Support may be added in a future version, depending on whether there is a demand for it.
Cassandra allows one to define a "static column family", where column metadata is defined in the Column Family header and is obeyed by all records.
These "static" columns can be mapped to regular columns in MariaDB. A static column named 'foo' in Cassandra should have a counterpart named 'foo' in MariaDB. The types must also match; they are covered below.
Cassandra also allows individual rows to have their own sets of columns. In other words, each row can have its own unique columns.
These columns can be accessed through MariaDB's Dynamic Columns feature. To do so, one must define a column:
with an arbitrary name
of type blob
with the DYNAMIC_COLUMN_STORAGE=yes attribute
Here is an example:
Once defined, one can access individual columns with the new variant of the Dynamic Column functions, which now support string names (they used to support integers only).
Cassandra's SuperColumns are not supported; there are currently no plans to support them.
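For example, assuming the hypothetical table below with a dynamic_cols blob, a per-row column can be read by name:

```sql
-- Sketch: read one of Cassandra's dynamic columns, packed into the blob,
-- using the named variant of the Dynamic Column functions.
SELECT rowkey,
       COLUMN_GET(dynamic_cols, 'click_count' AS INT) AS click_count
FROM cassandra_tbl
WHERE rowkey='rowkey10';
```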
There is no direct 1-to-1 mapping between Cassandra's datatypes and MySQL/MariaDB datatypes. Also, Cassandra's size limitations are often more relaxed than MySQL/MariaDB's. For example, Cassandra's limit on rowkey length is about 2G, while MySQL limits unique key length to about 1.5Kb.
The types must be mapped as follows:
For types like "VARBINARY(n)", n should be chosen sufficiently large to accommodate all the data that is encountered in the table.
Cassandra doesn't provide any practical way to make INSERT different from UPDATE. Therefore, INSERT works as INSERT-or-UPDATE; it will overwrite the data, if necessary.
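A small sketch of the consequence (cassandra_tbl is a hypothetical Cassandra SE table):

```sql
-- Both statements succeed; the second silently overwrites the first,
-- because INSERT behaves as INSERT-or-UPDATE.
INSERT INTO cassandra_tbl (rowkey, column1) VALUES ('key1', 'old value');
INSERT INTO cassandra_tbl (rowkey, column1) VALUES ('key1', 'new value');
```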
INSERT ... SELECT and multi-line INSERT will try to write data in batches. Batch size is controlled by the cassandra_insert_batch_size system variable, which specifies the max batch size in columns.
The status variables Cassandra_row_inserts and Cassandra_row_insert_batches allow one to see whether inserts are actually batched.
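For instance, one can compare the two counters before and after a multi-row INSERT:

```sql
SHOW STATUS LIKE 'Cassandra_row_insert%';
```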
UPDATE works like one would expect SQL's UPDATE command to work (i.e. changing a primary key value will result in the old record being deleted and a new record being inserted)
DELETE FROM cassandra_table maps to the truncate(column_family) call.
The DELETE with WHERE clause will do per-row deletions.
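A sketch of the two forms, using the hypothetical cassandra_tbl:

```sql
DELETE FROM cassandra_tbl;                    -- maps to truncate(column_family)
DELETE FROM cassandra_tbl WHERE rowkey='k1';  -- per-row deletion
```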
Generally, all SELECT statements work like one expects SQL to work. Conditions in the form primary_key=... allow the server to construct query plans which access Cassandra's rows with key lookups.
Full table scans are performed in a memory-efficient way. Cassandra SE performs a full table scan as a series of batches, each of which reads no more than cassandra_rnd_batch_size records.
Cassandra SE supports Batched Key Access in no-association mode. This means that it requires the SQL layer to do hashing, which means the following settings are required (applied in the example after this list):
optimizer_switch='join_cache_hashed=on'
join_cache_level=7|8
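For example, the settings above can be applied for the current session like so:

```sql
SET SESSION optimizer_switch='join_cache_hashed=on';
SET SESSION join_cache_level=7;
```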
Cassandra SE is currently unable to make use of space in the join buffer (the one whose size is controlled by join_buffer_size). Instead, it will limit read batches to reading no more than cassandra_multiget_batch_size keys at a time, with the memory allocated on the heap.
Note that the buffer is still needed by the SQL layer, so its value should still be increased if you want to read in big batches.
It is possible to track the number of read batches, how many keys were looked up, and how many results were produced with these status variables: Cassandra_multiget_reads, Cassandra_multiget_keys_scanned, and Cassandra_multiget_rows_read.
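These can be inspected together with a single pattern match:

```sql
SHOW STATUS LIKE 'Cassandra_multiget%';
```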
Cassandra 1.2 has slightly changed its data model. This has caused some Thrift-based clients to no longer work (for example, Pig ran into this problem).
Currently, Cassandra SE is only able to access Cassandra 1.2's column families that were defined WITH COMPACT STORAGE attribute.
Slides from the talk at Percona Live 2013
JIRA task for Cassandra SE work
This page is licensed: CC BY-SA / Gnu FDL
```
yum install MariaDB-cassandra-engine
```

```
install soname 'ha_cassandra.so';
```

| Cassandra type | MariaDB type |
| --- | --- |
| uuid | CHAR(36); the UUID is represented in text form on the MariaDB side |
| timestamp | TIMESTAMP (second precision), TIMESTAMP(6) (microsecond precision), or BIGINT (gets Cassandra's 64-bit milliseconds-since-epoch verbatim) |
| boolean | BOOL |
| float | FLOAT |
| double | DOUBLE |
| decimal | VARBINARY(n) |
| counter | BIGINT; only reading is supported |
| blob | BLOB, VARBINARY(n) |
| ascii | BLOB, VARCHAR(n); use charset=latin1 |
| text | BLOB, VARCHAR(n); use charset=utf8 |
| varint | VARBINARY(n) |
| int | INT |
| bigint | BIGINT, TINY, SHORT (pick the one that will fit the real data) |
```
set cassandra_default_thrift_host='192.168.0.10'  -- Cassandra's address. It can also
                                                  -- be specified as a startup parameter
                                                  -- or on a per-table basis

create table cassandra_tbl       -- table name can be chosen at will
(
  rowkey type PRIMARY KEY,       -- represents the Column Family's rowkey. Primary key
                                 -- must be defined over this column.
  column1 type,                  -- Cassandra's static columns can be mapped to
  column2 type,                  -- regular SQL columns.
  dynamic_cols blob DYNAMIC_COLUMN_STORAGE=yes  -- If you need to access Cassandra's
                                                -- dynamic columns, you can define
                                                -- a blob which will receive all of
                                                -- them, packed as MariaDB's dynamic
                                                -- columns.
) engine=cassandra
  keyspace='cassandra_key_space'        -- Cassandra's keyspace.columnFamily we
  column_family='column_family_name';   -- are accessing.
```

```
dynamic_cols blob DYNAMIC_COLUMN_STORAGE=yes
```
The TokuDB storage engine has been removed from MariaDB.
TokuDB has been deprecated by its upstream maintainer. It is disabled from MariaDB 10.5 and has been removed in MariaDB 10.6 - MDEV-19780. We recommend MyRocks as a long-term migration path.
This page lists system variables that are related to TokuDB.
See Server System Variables for a complete list of system variables and instructions on setting them, and the Full list of MariaDB options, system and status variables for a complete list of all options, status variables and system variables in MariaDB.
tokudb_alter_print_error
Description: Print errors for ALTER TABLE operations.
Scope: Global, Session
Dynamic: Yes
Data Type: boolean
tokudb_analyze_time
Description: Time in seconds that ANALYZE operations spend on each index when calculating cardinality. Accurate cardinality helps in particular with the performance of complex queries. If ANALYZE is never run, cardinality is 1 for primary indexes, and unknown (NULL) for other types of indexes.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
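A sketch of adjusting it before recalculating statistics (t1 is a hypothetical TokuDB table):

```sql
SET SESSION tokudb_analyze_time=10;  -- spend up to 10 seconds per index
ANALYZE TABLE t1;
```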
tokudb_block_size
Description: Uncompressed size of internal fractal tree and leaf nodes. Changing it will only affect tables created after the new setting is in effect. Existing tables will keep the setting they were created with unless the table is dumped and reloaded.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
tokudb_bulk_fetch
Description: If set to 1 (the default), the bulk fetch algorithm is used for SELECTs and DELETEs, including related statements such as INSERT INTO ... SELECT.
Scope: Global, Session
Dynamic: Yes
Data Type: boolean
tokudb_cache_size
Description: Size in bytes of the TokuDB cache. This variable is read-only and cannot be changed dynamically. To change the value, either set the value in the my.cnf file prior to loading TokuDB, or restart MariaDB after modifying the configuration. If you have loaded the plugin but not used TokuDB yet, you can unload the plugin, then reload it, and MariaDB will reload the plugin with the setting from the configuration file. Setting to at least half of the available memory is recommended, although if using direct IO instead of buffered IO (see tokudb_directio), up to 80% of the available memory is recommended. Decrease if other applications require significant memory or swapping is degrading performance.
Dynamic: No
Data Type: numeric
tokudb_check_jemalloc
Description: Check if jemalloc is linked.
Scope: Global
Dynamic: Yes
Data Type: numeric
tokudb_checkpoint_lock
Description: Mechanism to lock out TokuDB checkpoints. When set to 1, TokuDB checkpoints are locked out. Setting to 0, or disconnecting the client, releases the lock.
Scope: Global, Session
Dynamic: Yes
tokudb_checkpoint_on_flush_logs
Description: TokuDB checkpoint on flush logs.
Scope: Global
Dynamic: Yes
Data Type: boolean
tokudb_checkpointing_period
Description: Time in seconds between the beginning of each checkpoint. It is recommended to leave this at the default setting of 1 minute.
Scope: Global
Dynamic: Yes
Data Type: numeric
tokudb_cleaner_iterations
Description: Number of internal nodes processed in each cleaner thread period (see tokudb_cleaner_period). Setting to 0 turns off cleaner threads.
Scope: Global
Dynamic: Yes
Data Type: numeric
tokudb_cleaner_period
Description: Frequency in seconds for the running of the cleaner thread. Setting to 0 turns off cleaner threads.
Scope: Global
Dynamic: Yes
Data Type: numeric
tokudb_commit_sync
Description: Whether or not the transaction log is flushed upon transaction commit. Flushing has a minor performance penalty, but switching it off means that committed transactions may not survive a server crash.
Scope: Global, Session
Dynamic: Yes
Data Type: boolean
tokudb_create_index_online
Description: Whether indexes are hot or not. Hot, or online, indexes (the default) mean that the table is available for inserts and updates while the index is being created. It is slower to create hot indexes.
Scope: Global, Session
Dynamic: Yes
Data Type: boolean
tokudb_data_dir
Description: Directory where the TokuDB data is stored. By default the variable is empty, in which case the regular datadir is used.
Dynamic: No
Data Type: string
Default Value: Empty (the MariaDB datadir is used)
tokudb_debug
Description: Setting to a non-zero value turns on various TokuDB debug traces.
Scope: Global
Dynamic: Yes
Data Type: numeric
tokudb_directio
Description: When set to ON, TokuDB writes use Direct IO instead of Buffered IO. tokudb_cache_size should be adjusted when using Direct IO.
Dynamic: No
Data Type: boolean
Default Value: OFF
tokudb_disable_hot_alter
Description: If set to ON (OFF is default), hot ALTER TABLE is disabled.
Scope: Global, Session
Dynamic: Yes
Data Type: boolean
tokudb_disable_prefetching
Description: If prefetching is not disabled (the default), range queries usually benefit from aggressive prefetching of blocks of rows. For range queries with LIMIT clauses, this can create unnecessary IO, and so prefetching can be disabled if these make up a majority of range queries.
Scope: Global, Session
Dynamic: Yes
Data Type: boolean
tokudb_disable_slow_alter
Description: Usually, TokuDB permits column addition, deletion, expansion, and renaming with minimal locking, very quickly. This variable determines whether certain slow ALTER TABLE statements that cannot take advantage of this feature are permitted. Statements that are slow are those that include a mix of column additions, deletions or expansions, for example, ALTER TABLE t1 ADD COLUMN c1 INT, DROP COLUMN c2.
Scope: Global, Session
Dynamic: Yes
tokudb_empty_scan
Description: TokuDB algorithm to check if the table is empty when opened. Setting to disabled will reduce this overhead.
Scope: Global, Session
Dynamic: Yes
Data Type: enum
tokudb_fs_reserve_percent
Description: If this percentage of the filesystem is not free, inserts are prohibited. Inserts are re-enabled once twice the reserve is available. TokuDB will freeze entirely if the disk becomes entirely full.
Scope: Global
Dynamic: No
Data Type: numeric
tokudb_fsync_log_period
Description: Frequency of fsync() operations in milliseconds. If set to 0, the default, tokudb_commit_sync controls fsync() behavior.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Warning: currently values in the 1000-2000 range seem to cause server crashes.
tokudb_hide_default_row_format
Description: Hide the default row format.
Scope: Global, Session
Dynamic: Yes
Data Type: boolean
tokudb_killed_time
Description: Control lock tree kill callback frequency.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
tokudb_last_lock_timeout
Description: Empty by default; when a lock deadlock is detected, or a lock request times out, set to a JSON document describing the most recent lock conflict. Only set when the first bit of tokudb_lock_timeout_debug is set.
Scope: Global, Session
Dynamic: Yes
Data Type: text
tokudb_load_save_space
Description: If set to 1, the default, bulk loader intermediate data is compressed, otherwise it is uncompressed. Also see tokudb_tmp_dir.
Scope: Global, Session
Dynamic: Yes
Data Type: boolean
tokudb_loader_memory_size
Description: Memory limit for each loader instance used by the TokuDB bulk loader. Memory is taken from the TokuDB cache (tokudb_cache_size), so current cache data may need to be cleared for the loader to begin. Increase if tables are very large, with multiple secondary indexes.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
tokudb_lock_timeout
Description: Time in milliseconds that a transaction will wait for a lock held by another transaction to be released before timing out with a lock wait timeout error (-30994). Setting to 0 disables lock waits.
Scope: Global, Session
Dynamic: Yes
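For example, a session that expects long-running conflicting transactions might raise the timeout (a sketch):

```sql
SET SESSION tokudb_lock_timeout=60000;  -- wait up to 60 seconds for row locks
```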
tokudb_lock_timeout_debug
Description: When bit zero is set (default 1), a JSON document describing the most recent lock conflict is reported to tokudb_last_lock_timeout. When set to 0, no lock conflicts are reported. When bit one is set, the JSON document is printed to the error log.
Scope: Global, Session
Dynamic: Yes
tokudb_log_dir
Description: Directory where the TokuDB log files are stored. By default the variable is empty, in which case the regular datadir is used.
Dynamic: No
Data Type: string
Default Value: Empty (the MariaDB datadir is used)
tokudb_max_lock_memory
Description: Max memory for locks.
Scope: Global, Session
Dynamic: No
Data Type: numeric
tokudb_optimize_index_fraction
Description: When deleting a percentage of the tree (useful when the left side of the tree has many deletions, such as a pattern with increasing ids or dates), it's possible to optimize a subset of the fractal tree, as determined by the value of this variable, which ranges from 0.0 to 1.0 (indicating the whole tree).
Scope: Global, Session
Dynamic: Yes
tokudb_optimize_index_name
Description: If set to an index name, will optimize that single index in a table. Empty by default.
Scope: Global, Session
Dynamic: Yes
Data Type: string
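A sketch of optimizing a single index (the index and table names are hypothetical):

```sql
SET SESSION tokudb_optimize_index_name='PRIMARY';
OPTIMIZE TABLE t1;  -- optimizes only the named index
```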
tokudb_optimize_throttle
Description: Table optimization utilizes all available resources by default. This variable allows the table optimization speed to be limited in order to reduce the overall resources used. The limit places an upper bound on the number of fractal tree leaf nodes that are optimized per second. 0, the default, imposes no limit.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
tokudb_pk_insert_mode
Description: Mode for primary key inserts using either REPLACE INTO or INSERT IGNORE on tables with no secondary index, or where all columns in the secondary index are in the primary key, for example PRIMARY KEY (a,b,c), KEY (b,c).
0: Fast inserts. Triggers may not work, and row-based replication will not work.
1: Fast inserts if no triggers are defined, otherwise inserts may be slow. Row-based replication will not work.
2: Slow inserts. Triggers and row-based replication work normally.
tokudb_prelock_empty
Description: If set to 0 (1 is default), fast bulk loading is switched off. Usually, TokuDB obtains a table lock on empty tables. If, as is usual, only one transaction is loading the table, this speeds up the inserts. However, if many transactions are loading, only one can have access at a time, so setting this to 0, avoiding the lock, will speed up inserts in that situation.
Scope: Global, Session
Dynamic: Yes
tokudb_read_block_size
Description: Uncompressed size in bytes of the read blocks of the fractal tree leaves. Changing it will only affect tables created after the new setting is in effect. Existing tables will keep the setting they were created with unless the table is dumped and reloaded. Larger values are better for large range scans and higher compressions, while smaller values are better for point and small range scans.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
tokudb_read_buf_size
Description: Per-client size in bytes of the buffer used for storing bulk fetched values as part of a large range query. Reduce if there are many simultaneous clients. Setting to 0 disables bulk fetching.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
tokudb_read_status_frequency
Description: Progress is measured every this many reads for display by SHOW PROCESSLIST. Useful to set to 1 to examine slow queries.
Scope: Global
Dynamic: Yes
Data Type: numeric
tokudb_row_format
Description: Compression algorithm used by default to compress data. Can be overridden by a row format specified in the CREATE TABLE statement. Note that the library can be specified directly, or an alias used, the mapping of which may change in future. Note that before MariaDB 5.5.37 and MariaDB 10.0.10, the compression type did not default to this value; see the TokuDB differences page.
tokudb_default, tokudb_zlib: Use the zlib library.
tokudb_fast, tokudb_quicklz: Use the quicklz library.
tokudb_small, tokudb_lzma: Use the lzma library, the highest compression and highest CPU usage.
tokudb_uncompressed: No compression is used.
tokudb_rpl_check_readonly
Description: By default, when the slave is in read-only mode, row events are run from the binary log using TokuDB's read-free replication (RFR). Setting this variable to OFF turns off the slave read-only check, allowing RFR to run when the slave is not read-only. Be careful that you understand the consequences when setting this variable.
Scope: Global, Session
Dynamic: Yes
Data Type: boolean
tokudb_rpl_lookup_rows
Description: If set to OFF (ON is default), with binlog_format set to ROW and read_only set to ON, TokuDB replication slaves will not perform row lookups for update or delete row log events, removing the need for the associated IO.
Scope: Global, Session
Dynamic: Yes
tokudb_rpl_lookup_rows_delay
Description: Can be used to simulate long disk reads by sleeping for the specified time, in microseconds, before the row lookup query. Only useful to change in a test environment.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
tokudb_rpl_unique_checks
Description: If set to OFF (ON is default), with binlog_format set to ROW and read_only set to ON, TokuDB replication slaves will skip uniqueness checks on inserts and updates, removing the associated IO.
Scope: Global, Session
Dynamic: Yes
tokudb_rpl_unique_checks_delay
Description: Can be used to simulate long disk reads by sleeping for the specified time, in microseconds, before the row lookup query. Only useful to change in a test environment.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
tokudb_support_xa
Description: Whether or not the prepare phase of an XA transaction performs an fsync().
Scope: Global, Session
Dynamic: Yes
Data Type: boolean
tokudb_tmp_dir
Description: Directory where the TokuDB bulk loader's temporary files are stored. These can be very large, so it can be useful to place them on a separate disk. By default the variable is empty, in which case the regular datadir is used. tokudb_load_save_space determines whether the data is compressed or not. The error message ERROR 1030 (HY000): Got error 1 from storage engine could indicate that the disk has run out of space.
Dynamic: No
Data Type: directory name
tokudb_version
Description: The version of the TokuDB plugin included in MariaDB.
Dynamic: No
Data Type: string
tokudb_write_status_frequency
Description: Progress is measured every this many writes for display by SHOW PROCESSLIST. Useful to set to 1 to examine slow queries.
Scope: Global
Dynamic: Yes
Data Type: numeric
This page is licensed: CC BY-SA / Gnu FDL
Default Value: OFF
Default Value: 5
Range: 0 to 4294967295
Default Value: 4194304 (4MB)
Range: 0 to 18446744073709551615
Default Value: ON
Default Value: Half of the total system memory
Default Value: 1
Valid Values: 0 and 1
Default Value: OFF
Default Value: OFF
Default Value: 60
Range: 0 to 4294967295
Default Value: 5
Range: 0 to 18446744073709551615
Default Value: 1
Range: 0 to 18446744073709551615
Default Value: ON
Default Value: ON
Default Value: 0
Range: 0 to 18446744073709551615
Default Value: OFF
Default Value: OFF
Default Value: OFF
Default Value: rl
Valid Values: lr, rl, disabled
Default Value: 5
Default Value: 0
Range: 0 to 18446744073709551615
Default Value: ON
Default Value: 4000
Range: 0 to 18446744073709551615
Introduced: TokuDB 7.1.5
Default Value: Empty
Default Value: ON
Default Value: 100000000 (100M)
Range: 0 to 18446744073709551615
Default Value: 4000 (4 seconds)
Range: 0 to 18446744073709551615
Default Value: 1
Default Value: 130653952
Default Value: 1.000000
Range: 0.0 to 1.0
Introduced: TokuDB 7.5.5
Introduced: TokuDB 7.5.5
Default Value: 0
Range: 0 to 18446744073709551615
Introduced: TokuDB 7.5.5
Scope: Global, Session
Dynamic: Yes
Data Type: enumerated
Default Value: 1
Valid Values: 0, 1, 2
Data Type: boolean
Default Value: ON
Default Value: 65536 (64KB)
Range: 4096 to 4294967295
Default Value: 131072 (128KB)
Range: 0 to 1048576
Default Value: 10000
Range: 0 to 4294967295
Scope: Global, Session
Dynamic: Yes
Data Type: enumerated
Default Value: tokudb_zlib
Valid Values: tokudb_default, tokudb_fast, tokudb_small, tokudb_zlib, tokudb_quicklz, tokudb_lzma, tokudb_uncompressed
Default Value: ON
Data Type: boolean
Default Value: ON
Default Value: 0
Data Type: boolean
Default Value: ON
Default Value: 0
Default Value: ON
Default Value: Empty (the MariaDB datadir is used)
Default Value: 1000
Range: 0 to 4294967295
Legacy Cassandra storage engine description. Cassandra was removed from MariaDB in MariaDB 10.6.
This page is a short demo of what using Cassandra Storage Engine looks like.
First, a keyspace and column family must be created in Cassandra:
```
cqlsh> CREATE KEYSPACE mariadbtest2
   ...   WITH strategy_class = 'org.apache.cassandra.locator.SimpleStrategy'
   ...   AND strategy_options:replication_factor='1';
cqlsh> USE mariadbtest2;
cqlsh:mariadbtest2> create columnfamily cf1 ( pk varchar primary key, data1 varchar, data2 bigint);
cqlsh:mariadbtest2> select * from cf1;
cqlsh:mariadbtest2>
```
Now, let's try to connect an SQL table to it:
```
MariaDB [test]> create table t1 (
    ->   rowkey varchar(36) primary key,
    ->   data1 varchar(60), data2 varchar(60)
    -> ) engine=cassandra thrift_host='localhost' keyspace='mariadbtest2' column_family='cf1';
ERROR 1928 (HY000): Internal error: 'Failed to map column data2 to datatype org.apache.cassandra.db.marshal.LongType'
```
We've used a wrong datatype. Let's try again:
```
MariaDB [test]> create table t1 (
    ->   rowkey varchar(36) primary key,
    ->   data1 varchar(60), data2 bigint
    -> ) engine=cassandra thrift_host='localhost' keyspace='mariadbtest2' column_family='cf1';
Query OK, 0 rows affected (0.04 sec)
```
OK. Let's insert some data:
Let's select it back:
Now, let's check if it can be seen in Cassandra:
Or, in cassandra-cli:
This page is licensed: CC BY-SA / Gnu FDL
```
MariaDB [test]> insert into t1 values ('rowkey10', 'data1-value', 123456);
Query OK, 1 row affected (0.01 sec)

MariaDB [test]> insert into t1 values ('rowkey11', 'data1-value2', 34543);
Query OK, 1 row affected (0.00 sec)

MariaDB [test]> insert into t1 values ('rowkey12', 'data1-value3', 454);
Query OK, 1 row affected (0.00 sec)
```

```
MariaDB [test]> select * from t1 where rowkey='rowkey11';
+----------+--------------+-------+
| rowkey   | data1        | data2 |
+----------+--------------+-------+
| rowkey11 | data1-value2 | 34543 |
+----------+--------------+-------+
1 row in set (0.00 sec)
```

```
cqlsh:mariadbtest2> select * from cf1;
 pk       | data1        | data2
----------+--------------+--------
 rowkey12 | data1-value3 |    454
 rowkey10 | data1-value  | 123456
 rowkey11 | data1-value2 |  34543
```

```
[default@mariadbtest2] list cf1;
Using default limit of 100
Using default column limit of 100
-------------------
RowKey: rowkey12
=> (column=data1, value=data1-value3, timestamp=1345452471835)
=> (column=data2, value=454, timestamp=1345452471835)
-------------------
RowKey: rowkey10
=> (column=data1, value=data1-value, timestamp=1345452467728)
=> (column=data2, value=123456, timestamp=1345452467728)
-------------------
RowKey: rowkey11
=> (column=data1, value=data1-value2, timestamp=1345452471831)
=> (column=data2, value=34543, timestamp=1345452471831)

3 Rows Returned.
Elapsed time: 5 msec(s).
```
The TokuDB storage engine has been removed from MariaDB.
TokuDB has been deprecated by its upstream maintainer. It is disabled from MariaDB 10.5 and has been removed in MariaDB 10.6 - MDEV-19780. We recommend MyRocks as a long-term migration path.
This page documents status variables related to the TokuDB storage engine. See Server Status Variables for a complete list of status variables that can be viewed with SHOW STATUS.
See also the Full list of MariaDB options, system and status variables.
Tokudb_basement_deserialization_fixed_key
Description: Number of deserialized basement nodes where all keys were the same size. This leaves the basement in an optimal format for in-memory workloads.

Tokudb_basement_deserialization_variable_key
Description: Number of deserialized basement nodes where the keys had different sizes, which are not eligible for in-memory optimization.

Tokudb_basements_decompressed_for_write
Description: Number of basement nodes decompressed for write operations.

Tokudb_basements_decompressed_prefetch
Description: Number of basement nodes decompressed by a prefetch thread.

Tokudb_basements_decompressed_prelocked_range
Description: Number of basement nodes aggressively decompressed by queries.

Tokudb_basements_decompressed_target_query
Description: Number of basement nodes decompressed for queries.

Tokudb_basements_fetched_for_write
Description: Number of basement nodes fetched for writes off the disk.

Tokudb_basements_fetched_for_write_bytes
Description: Total basement node bytes fetched for writes off the disk.

Tokudb_basements_fetched_for_write_seconds
Description: Time in seconds spent waiting for IO while fetching basement nodes from disk for writes.

Tokudb_basements_fetched_prefetch
Description: Number of basement nodes fetched off the disk by a prefetch thread.

Tokudb_basements_fetched_prefetch_bytes
Description: Total basement node bytes fetched off the disk by a prefetch thread.

Tokudb_basements_fetched_prefetch_seconds
Description: Time in seconds spent waiting for IO while fetching basement nodes off the disk by a prefetch thread.

Tokudb_basements_fetched_prelocked_range
Description: Number of basement nodes aggressively fetched from disk.

Tokudb_basements_fetched_prelocked_range_bytes
Description: Total basement node bytes aggressively fetched off the disk.

Tokudb_basements_fetched_prelocked_range_seconds
Description: Time in seconds spent waiting for IO while aggressively fetching basement nodes from disk.

Tokudb_basements_fetched_target_query
Description: Number of basement nodes fetched for queries off the disk.

Tokudb_basements_fetched_target_query_bytes
Description: Total basement node bytes fetched for queries off the disk.

Tokudb_basements_fetched_target_query_seconds
Description: Time in seconds spent waiting for IO while fetching basement nodes from disk for queries.

Tokudb_broadcase_messages_injected_at_root
Description: Number of broadcast messages injected at root.

Tokudb_buffers_decompressed_for_write
Description: Number of buffers decompressed for writes.

Tokudb_buffers_decompressed_prefetch
Description: Number of buffers decompressed by a prefetch thread.

Tokudb_buffers_decompressed_prelocked_range
Description: Number of buffers aggressively decompressed by queries.

Tokudb_buffers_decompressed_target_query
Description: Number of buffers decompressed for queries.

Tokudb_buffers_fetched_for_write
Description: Number of buffers fetched for writes off the disk.

Tokudb_buffers_fetched_for_write_bytes
Description: Total buffer bytes fetched for writes off the disk.

Tokudb_buffers_fetched_for_write_seconds
Description: Time in seconds spent waiting for IO while fetching buffers from disk for writes.

Tokudb_buffers_fetched_prefetch
Description: Number of buffers fetched for queries off the disk by a prefetch thread.

Tokudb_buffers_fetched_prefetch_bytes
Description: Total buffer bytes fetched for queries off the disk by a prefetch thread.

Tokudb_buffers_fetched_prefetch_seconds
Description: Time in seconds spent waiting for IO while fetching buffers from disk for queries by a prefetch thread.

Tokudb_buffers_fetched_prelocked_range
Description: Number of buffers aggressively fetched for queries off the disk.

Tokudb_buffers_fetched_prelocked_range_bytes
Description: Total buffer bytes aggressively fetched for queries off the disk.

Tokudb_buffers_fetched_prelocked_range_seconds
Description: Time in seconds spent waiting for IO while aggressively fetching buffers from disk for queries.

Tokudb_buffers_fetched_target_query
Description: Number of buffers fetched for queries off the disk.

Tokudb_buffers_fetched_target_query_bytes
Description: Total buffer bytes fetched for queries off the disk.

Tokudb_buffers_fetched_target_query_seconds
Description: Time in seconds spent waiting for IO while fetching buffers from disk for queries.
Tokudb_cachetable_cleaner_executions
Description: Number of times the cleaner thread loop has executed.

Tokudb_cachetable_cleaner_iterations
Description: Number of cleaner operations performed in each cleaner period.

Tokudb_cachetable_cleaner_period
Description: Time in seconds between the end of one group of cleaner operations and the beginning of the next. The TokuDB cleaner thread runs in the background, optimizing indexes and performing work that does not need to be done by the client thread.

Tokudb_cachetable_evictions
Description: Number of blocks evicted from the cache.

Tokudb_cachetable_long_wait_pressure_count
Description: Number of times a thread was stalled for more than one second due to cache pressure.

Tokudb_cachetable_long_wait_pressure_time
Description: Time in microseconds spent waiting for more than one second for cache pressure to ease.

Tokudb_cachetable_miss
Description: Number of times the system failed to access data in the internal cache.

Tokudb_cachetable_miss_time
Description: Total time in microseconds spent waiting for disk reads (when the cache could not supply the data) to finish.

Tokudb_cachetable_prefetches
Description: Total number of times a block of memory was prefetched into the database cache. This happens when it is determined that the block is likely to be accessed by the application.

Tokudb_cachetable_size_cachepressure
Description: Size in bytes of data causing cache pressure, that is, the sum of the buffers and workdone counters.

Tokudb_cachetable_size_cloned
Description: Memory in bytes currently used for cloned nodes. Dirty nodes are cloned before serialization/compression during checkpoint operations, after which they are written to disk and the memory is freed for reuse.

Tokudb_cachetable_size_current
Description: Size in bytes of the uncompressed data in the cache.

Tokudb_cachetable_size_leaf
Description: Size in bytes of the leaf nodes in the cache.

Tokudb_cachetable_size_limit
Description: Size in bytes of the uncompressed data that can fit in the cache.

Tokudb_cachetable_size_nonleaf
Description: Size in bytes of the non-leaf nodes in the cache.

Tokudb_cachetable_size_rollback
Description: Size in bytes of the rollback nodes in the cache.

Tokudb_cachetable_size_writing
Description: Size in bytes currently queued to be written to disk.

Tokudb_cachetable_wait_pressure_count
Description: Number of times a thread was stalled due to cache pressure.

Tokudb_cachetable_wait_pressure_time
Description: Time in microseconds spent waiting for cache pressure to ease.
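To judge how close the cache is to its configured limit, the size counters can be compared directly. A minimal sketch using the information_schema.GLOBAL_STATUS table (variable names there are matched case-insensitively):

SELECT VARIABLE_NAME, VARIABLE_VALUE
  FROM information_schema.GLOBAL_STATUS
  WHERE VARIABLE_NAME IN
    ('TOKUDB_CACHETABLE_SIZE_CURRENT', 'TOKUDB_CACHETABLE_SIZE_LIMIT');

If size_current stays near size_limit while the wait_pressure counters keep growing, the cache is likely undersized for the workload.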
Tokudb_checkpoint_begin_time
Description: Cumulative time in microseconds needed to mark all dirty nodes as pending a checkpoint.

Tokudb_checkpoint_duration
Description: Time in seconds needed to complete all checkpoints.

Tokudb_checkpoint_duration_last
Description: Time in seconds needed to complete the last checkpoint.

Tokudb_checkpoint_failed
Description: Total number of checkpoints that failed.

Tokudb_checkpoint_last_began
Description: Date and time the most recent checkpoint began, for example: Wed May 14 11:26:42 2014. It will show Dec 31, 1969 on Linux if no checkpoint has ever begun.

Tokudb_checkpoint_last_complete_began
Description: Date and time the last complete checkpoint started.

Tokudb_checkpoint_last_complete_ended
Description: Date and time the last complete checkpoint ended.

Tokudb_checkpoint_long_begin_count
Description: Number of long checkpoint begins (checkpoint begins taking more than one second).

Tokudb_checkpoint_long_begin_time
Description: Total time in microseconds of long checkpoint begins (checkpoint begins taking more than one second).

Tokudb_checkpoint_period
Description: Time in seconds between the end of one automatic checkpoint and the beginning of the next.

Tokudb_checkpoint_taken
Description: Total number of checkpoints taken.
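The checkpoint counters are easiest to read as a group, for example:

SHOW GLOBAL STATUS LIKE 'Tokudb_checkpoint%';

As a rough heuristic, if Tokudb_checkpoint_duration_last regularly approaches Tokudb_checkpoint_period, checkpoints are barely keeping up with the write load.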
Tokudb_cursor_skip_deleted_leaf_entry
Description: Number of deleted leaf entries skipped during a range scan.
Introduced: TokuDB 7.5.4

Tokudb_db_closes
Description: Number of db_close operations.

Tokudb_db_open_current
Description: Number of databases currently open.

Tokudb_db_open_max
Description: Maximum number of databases open at the same time.

Tokudb_db_opens
Description: Number of db_open operations.
Tokudb_descriptor_set
Description: Number of times a descriptor was updated when the entire dictionary was updated.

Tokudb_dictionary_broadcast_updates
Description: Number of successful broadcast updates (an update that affects all rows in a dictionary).

Tokudb_dictionary_updates
Description: Total number of rows updated in all primary and secondary indexes combined, where those updates were done with a separate recovery log entry per index.

Tokudb_filesystem_fsync_num
Description: Total number of times the database has flushed the operating system's file buffers to disk.

Tokudb_filesystem_fsync_time
Description: Total time in microseconds used to fsync to disk.

Tokudb_filesystem_long_fsync_num
Description: Total number of times the database has flushed the operating system's file buffers to disk when the operation took more than one second.

Tokudb_filesystem_long_fsync_time
Description: Total time in microseconds used to fsync to disk when the operation took more than one second.

Tokudb_filesystem_threads_blocked_by_full_disk
Description: Number of threads currently blocked because they attempted to write to a full disk. If this is not zero, a warning appears in the disk free space field.
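Since both a total time and a total count are exposed, the average fsync latency can be derived directly from these counters. A minimal sketch (VARIABLE_VALUE is stored as text, but the division coerces it to a number):

SELECT
    (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
      WHERE VARIABLE_NAME = 'TOKUDB_FILESYSTEM_FSYNC_TIME')
  / (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
      WHERE VARIABLE_NAME = 'TOKUDB_FILESYSTEM_FSYNC_NUM')
  AS avg_fsync_microseconds;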
Tokudb_leaf_compression_to_memory_seconds
Description: Time in seconds spent on compressing leaf nodes.

Tokudb_leaf_decompression_to_memory_seconds
Description: Time in seconds spent on decompressing leaf nodes.

Tokudb_leaf_deserialization_to_memory_seconds
Description: Time in seconds spent on deserializing leaf nodes.

Tokudb_leaf_node_compression_ratio
Description: Ratio of uncompressed bytes in-memory to compressed bytes on-disk for leaf nodes.

Tokudb_leaf_node_full_evictions
Description: Number of times a full leaf node was evicted from the cache.

Tokudb_leaf_node_full_evictions_bytes
Description: Total bytes freed when a full leaf node was evicted from the cache.

Tokudb_leaf_node_partial_evictions
Description: Number of times part of a leaf node (a partition) was evicted from the cache.

Tokudb_leaf_node_partial_evictions_bytes
Description: Total bytes freed when part of a leaf node (a partition) was evicted from the cache.

Tokudb_leaf_nodes_created
Description: Total number of leaf nodes created.

Tokudb_leaf_nodes_destroyed
Description: Total number of leaf nodes destroyed.

Tokudb_leaf_nodes_flushed_checkpoint
Description: Number of leaf nodes flushed to disk as part of a checkpoint.

Tokudb_leaf_nodes_flushed_checkpoint_bytes
Description: Size in bytes of leaf nodes flushed to disk as part of a checkpoint.

Tokudb_leaf_nodes_flushed_checkpoint_seconds
Description: Time in seconds spent waiting for IO while writing leaf nodes flushed to disk as part of a checkpoint.

Tokudb_leaf_nodes_flushed_checkpoint_uncompressed_bytes
Description: Size in uncompressed bytes of leaf nodes flushed to disk as part of a checkpoint.

Tokudb_leaf_nodes_flushed_not_checkpoint
Description: Number of leaf nodes flushed to disk not as part of a checkpoint.

Tokudb_leaf_nodes_flushed_not_checkpoint_bytes
Description: Size in bytes of leaf nodes flushed to disk not as part of a checkpoint.

Tokudb_leaf_nodes_flushed_not_checkpoint_seconds
Description: Time in seconds spent waiting for IO while writing leaf nodes flushed to disk not as part of a checkpoint.

Tokudb_leaf_nodes_flushed_not_checkpoint_uncompressed_bytes
Description: Size in uncompressed bytes of leaf nodes flushed to disk not as part of a checkpoint.

Tokudb_leaf_serialization_to_memory_seconds
Description: Time in seconds spent on serializing leaf nodes.
Tokudb_loader_num_created
Description: Number of times a loader has been created.

Tokudb_loader_num_current
Description: Number of currently existing loaders.

Tokudb_loader_num_max
Description: Maximum number of loaders that existed at one time.
Tokudb_locktree_escalation_num
Description: Number of times the locktree needed to reduce its memory footprint by running lock escalation.

Tokudb_locktree_escalation_seconds
Description: Time in seconds spent performing locktree escalation.

Tokudb_locktree_latest_post_escalation_memory_size
Description: Memory size in bytes of the locktree after the most recent locktree escalation.

Tokudb_locktree_long_wait_count
Description: Number of waits of more than one second caused by a lock that could not be acquired due to a conflict with another transaction.

Tokudb_locktree_long_wait_escalation_count
Description: Number of times a client thread waited for more than one second for lock escalation to free up memory.

Tokudb_locktree_long_wait_escalation_time
Description: Time in microseconds of long waits (more than one second) for lock escalation to free up memory.

Tokudb_locktree_long_wait_time
Description: Total time in microseconds spent by clients waiting for more than one second for a lock conflict to be resolved.

Tokudb_locktree_memory_size
Description: Memory in bytes currently being used by the locktree.

Tokudb_locktree_memory_size_limit
Description: Maximum memory in bytes the locktree can use.

Tokudb_locktree_open_current
Description: Number of currently open locktrees.

Tokudb_locktree_pending_lock_requests
Description: Number of requests waiting for a lock to be granted.

Tokudb_locktree_sto_eligible_num
Description: Number of locktrees eligible for the single transaction optimization.

Tokudb_locktree_sto_ended_num
Description: Total number of times a single transaction optimization ended early because another transaction began.

Tokudb_locktree_sto_ended_seconds
Description: Time in seconds spent ending single transaction optimizations.

Tokudb_locktree_timeout_count
Description: Number of times a lock request timed out.

Tokudb_locktree_wait_count
Description: Number of times a lock could not be acquired as a result of a conflict with another transaction.

Tokudb_locktree_wait_escalation_count
Description: Number of times a client thread has waited on lock escalation. Lock escalation runs on a background thread when the sum of the acquired lock sizes reaches the locktree limit.

Tokudb_locktree_wait_escalation_time
Description: Time in microseconds that client threads spent waiting for lock escalation.

Tokudb_locktree_wait_time
Description: Total time in microseconds spent by clients waiting for a lock conflict to be resolved.
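For spotting contention, the wait and escalation counters are usually read together, for example:

SHOW GLOBAL STATUS LIKE 'Tokudb_locktree%wait%';

Growing long_wait counts point to transaction conflicts, while growing escalation counts indicate the locktree memory limit (Tokudb_locktree_memory_size_limit) is being reached.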
Tokudb_logger_wait_long
Description:

Tokudb_logger_writes
Description: Number of times the logger has written to disk.

Tokudb_logger_writes_bytes
Description: Total bytes the logger has written to disk.

Tokudb_logger_writes_seconds
Description: Time in seconds spent waiting for IO while writing logs to disk.

Tokudb_logger_writes_uncompressed_bytes
Description: Total uncompressed bytes the logger has written to disk.
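As with the fsync counters, the logger counters can be combined, for example to estimate the average log write size, which reflects how well log writes are batched. A minimal sketch:

SELECT
    (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
      WHERE VARIABLE_NAME = 'TOKUDB_LOGGER_WRITES_BYTES')
  / (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
      WHERE VARIABLE_NAME = 'TOKUDB_LOGGER_WRITES')
  AS avg_log_write_bytes;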
Tokudb_mem_estimated_maximum_memory_footprint
Description:
Tokudb_messages_flushed_from_h1_to_leaves_bytes
Description: Total bytes of messages flushed from h1 nodes to leaves.

Tokudb_messages_ignored_by_leaf_due_to_msn
Description: Number of messages ignored by a leaf because the message had already been applied.

Tokudb_messages_in_trees_estimate_bytes
Description: Estimate of the total bytes of messages currently in trees.

Tokudb_messages_injected_at_root
Description: Number of messages injected at root.

Tokudb_messages_injected_at_root_bytes
Description: Total bytes of messages injected at root for all trees.
Tokudb_nonleaf_compression_to_memory_seconds
Description: Time in seconds spent on compressing non-leaf nodes.

Tokudb_nonleaf_decompression_to_memory_seconds
Description: Time in seconds spent on decompressing non-leaf nodes.

Tokudb_nonleaf_deserialization_to_memory_seconds
Description: Time in seconds spent on deserializing non-leaf nodes.

Tokudb_nonleaf_node_compression_ratio
Description: Ratio of uncompressed bytes in-memory to compressed bytes on-disk for non-leaf nodes.

Tokudb_nonleaf_node_full_evictions
Description: Number of times a full non-leaf node was evicted from the cache.

Tokudb_nonleaf_node_full_evictions_bytes
Description: Total bytes freed when a full non-leaf node was evicted from the cache.

Tokudb_nonleaf_node_partial_evictions
Description: Number of times part of a non-leaf node (a partition) was evicted from the cache.

Tokudb_nonleaf_node_partial_evictions_bytes
Description: Total bytes freed when part of a non-leaf node (a partition) was evicted from the cache.

Tokudb_nonleaf_nodes_created
Description: Total number of non-leaf nodes created.

Tokudb_nonleaf_nodes_destroyed
Description: Total number of non-leaf nodes destroyed.

Tokudb_nonleaf_nodes_flushed_to_disk_checkpoint
Description: Number of non-leaf nodes flushed to disk as part of a checkpoint.

Tokudb_nonleaf_nodes_flushed_to_disk_checkpoint_bytes
Description: Size in bytes of non-leaf nodes flushed to disk as part of a checkpoint.

Tokudb_nonleaf_nodes_flushed_to_disk_checkpoint_seconds
Description: Time in seconds spent waiting for IO while writing non-leaf nodes flushed to disk as part of a checkpoint.

Tokudb_nonleaf_nodes_flushed_to_disk_checkpoint_uncompressed_bytes
Description: Size in uncompressed bytes of non-leaf nodes flushed to disk as part of a checkpoint.

Tokudb_nonleaf_nodes_flushed_to_disk_not_checkpoint
Description: Number of non-leaf nodes flushed to disk not as part of a checkpoint.

Tokudb_nonleaf_nodes_flushed_to_disk_not_checkpoint_bytes
Description: Size in bytes of non-leaf nodes flushed to disk not as part of a checkpoint.

Tokudb_nonleaf_nodes_flushed_to_disk_not_checkpoint_seconds
Description: Time in seconds spent waiting for IO while writing non-leaf nodes flushed to disk not as part of a checkpoint.

Tokudb_nonleaf_nodes_flushed_to_disk_not_checkpoint_uncompressed_bytes
Description: Size in uncompressed bytes of non-leaf nodes flushed to disk not as part of a checkpoint.

Tokudb_nonleaf_serialization_to_memory_seconds
Description: Time in seconds spent on serializing non-leaf nodes.

Tokudb_overall_node_compression_ratio
Description: Ratio of uncompressed bytes in-memory to compressed bytes on-disk for all nodes.
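The three compression-ratio variables give a quick check on how effective the configured compression is for the data set. For example:

SHOW GLOBAL STATUS LIKE 'Tokudb%node_compression_ratio';

A ratio of 10 means the uncompressed in-memory representation is ten times larger than the compressed on-disk one.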
Tokudb_pivots_fetched_for_query
Description: Total number of pivot nodes fetched for queries.

Tokudb_pivots_fetched_for_query_bytes
Description: Number of bytes of pivot nodes fetched for queries.

Tokudb_pivots_fetched_for_query_seconds
Description: Time in seconds spent waiting for IO while fetching pivot nodes for queries.

Tokudb_pivots_fetched_for_prefetch
Description: Total number of pivot nodes fetched by a prefetch thread.

Tokudb_pivots_fetched_for_prefetch_bytes
Description: Number of bytes of pivot nodes fetched by a prefetch thread.

Tokudb_pivots_fetched_for_prefetch_seconds
Description: Time in seconds spent waiting for IO while fetching pivot nodes by a prefetch thread.

Tokudb_pivots_fetched_for_write
Description: Total number of pivot nodes fetched for writes.

Tokudb_pivots_fetched_for_write_bytes
Description: Number of bytes of pivot nodes fetched for writes.

Tokudb_pivots_fetched_for_write_seconds
Description: Time in seconds spent waiting for IO while fetching pivot nodes for writes.
Tokudb_promotion_h1_roots_injected_into
Description: Number of times a message stopped at a root with a height of 1.

Tokudb_promotion_injections_at_depth_0
Description: Number of times a message stopped at a depth of zero.

Tokudb_promotion_injections_at_depth_1
Description: Number of times a message stopped at a depth of one.

Tokudb_promotion_injections_at_depth_2
Description: Number of times a message stopped at a depth of two.

Tokudb_promotion_injections_at_depth_3
Description: Number of times a message stopped at a depth of three.

Tokudb_promotion_injections_lower_than_depth_3
Description: Number of times a message stopped at a depth greater than three.

Tokudb_promotion_leaf_roots_injected_into
Description: Number of times a message stopped at a root with a height of 0.

Tokudb_promotion_roots_split
Description: Number of times the root split during promotion.

Tokudb_promotion_stopped_after_locking_child
Description: Number of times a message stopped before a locked child.

Tokudb_promotion_stopped_at_height_1
Description: Number of times a message stopped because it reached a height of one.

Tokudb_promotion_stopped_child_locked_or_not_in_memory
Description: Number of times a message stopped because a child could not be accessed cheaply (it was locked or not stored in memory).

Tokudb_promotion_stopped_child_not_fully_in_memory
Description: Number of times a message stopped because a child was not fully stored in memory and so could not be accessed cheaply.

Tokudb_promotion_stopped_nonempty_buffer
Description: Number of times a message stopped because it reached a buffer that wasn't empty.
Tokudb_txn_aborts
Description: Number of transactions that have been aborted.

Tokudb_txn_begin
Description: Number of transactions that have been started.

Tokudb_txn_begin_read_only
Description: Number of read-only transactions that have been started.

Tokudb_txn_commits
Description: Number of transactions that have been committed.
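These transaction counters are cumulative since server start, so rates (for example, commits per second) require sampling the values twice and taking the difference. A simple snapshot:

SHOW GLOBAL STATUS LIKE 'Tokudb_txn%';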
This page is licensed: CC BY-SA / Gnu FDL