Storage Engines

Understand MariaDB Server's storage engines. Explore the features and use cases of InnoDB, Aria, MyISAM, and other engines to choose the best option for your specific data needs.

ARIA

Learn about the Aria storage engine in MariaDB Server. Understand its features, advantages, and use cases, particularly for crash-safe operations and transactional workloads.

Using CONNECT

The CONNECT storage engine has been deprecated.

CSV

The CSV storage engine stores data in text files using comma-separated values format, allowing easy data exchange with other applications.

InnoDB

Discover InnoDB, the default storage engine for MariaDB Server. Learn about its transaction-safe capabilities, foreign key support, and high performance for demanding workloads.

USING CONNECT - Offline Documentation

The CONNECT storage engine has been deprecated.

This storage engine has been deprecated.

Note: You can download a PDF version of the CONNECT documentation (1.7.0003).

This page is licensed: CC BY-SA / Gnu FDL

Aria Status Variables

A list of status variables specific to the Aria engine, providing metrics on page cache usage, transaction log syncs, and other internal operations.

This page documents status variables related to the Aria storage engine. See Server Status Variables for a complete list of status variables that can be viewed with SHOW STATUS.

See also the Full list of MariaDB options, system and status variables.

Aria_pagecache_blocks_not_flushed

The Aria Name

A brief history of the naming of the Aria storage engine, explaining its origins as "Maria" and the reasons for the eventual name change.

The storage engine used to be called Maria. This page gives the history and background of how and why this name was changed to Aria.

Backstory

When starting what became the MariaDB project, Monty and the initial developers only planned to work on a next generation storage engine replacement. This storage engine would be crash safe and eventually support transactions. Monty named the storage engine, and the project, after his daughter, Maria.

Work began in earnest on the Maria storage engine but the plans quickly expanded and morphed and soon the developers were not just working on a storage engine, but on a complete branch of the MySQL database. Since the project was already called Maria, it made sense to call the whole database server MariaDB.

Using CONNECT - Condition Pushdown

The CONNECT storage engine has been deprecated.

This storage engine has been deprecated.

The ODBC, JDBC, MYSQL, TBL, and WMI table types use engine condition pushdown in order to restrict the number of rows returned by the RDBMS source or the WMI component.

The CONDITION_PUSHDOWN argument used in old versions of CONNECT is no longer needed because CONNECT uses condition pushdown unconditionally.

This page is licensed: GPLv2

InnoDB Doublewrite Buffer

The doublewrite buffer is a storage area where InnoDB writes pages before writing them to the data file, preventing data corruption from partial page writes.

The doublewrite buffer was implemented to recover from half-written pages. This can happen when there's a power failure while InnoDB is writing a page to disk. On reading that page, InnoDB can discover the corruption from the mismatch of the page checksum. However, in order to recover, an intact copy of the page would be needed.

The double write buffer provides such a copy.

Whenever InnoDB flushes a page to disk, it is first written to the double write buffer. Only when the buffer is safely flushed to disk will InnoDB write the page to the final destination. When recovering, InnoDB scans the double write buffer and for each valid page in the buffer checks if the page in the data file is valid too.

Doublewrite Buffer Settings

FederatedX

FederatedX is a storage engine that allows access to tables on remote MariaDB or MySQL servers as if they were local tables.

This storage engine has been deprecated.

About FederatedX
Differences Between FederatedX and Federated

InnoDB Online DDL

Perform online DDL operations with InnoDB in MariaDB Server. Learn how to alter tables without blocking read/write access, ensuring high availability for your applications.
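As a minimal sketch (the table and column are hypothetical), an online schema change can request the in-place algorithm and non-blocking lock explicitly, so the statement fails with an error instead of silently falling back to a blocking table copy:

ALTER TABLE orders
  ADD COLUMN note VARCHAR(100),
  ALGORITHM=INPLACE, LOCK=NONE;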

InnoDB Architecture for MariaDB Enterprise Server

Understand InnoDB's architecture for MariaDB Enterprise Server. This section details its components and their interactions, focusing on performance, scalability, and reliability for enterprise workloads.

• Description: The number of dirty blocks in the Aria page cache. The global value can be flushed by FLUSH STATUS.

• Scope: Global

• Data Type: numeric

Aria_pagecache_blocks_unused

• Description: Free blocks in the Aria page cache. The global value can be flushed by FLUSH STATUS.

• Scope: Global

• Data Type: numeric

Aria_pagecache_blocks_used

• Description: Blocks used in the Aria page cache. The global value can be flushed by FLUSH STATUS.

• Scope: Global

• Data Type: numeric

Aria_pagecache_read_requests

• Description: The number of requests to read something from the Aria page cache.

• Scope: Global

• Data Type: numeric

Aria_pagecache_reads

• Description: The number of Aria page cache read requests that caused a block to be read from the disk.

• Scope: Global

• Data Type: numeric

Aria_pagecache_write_requests

• Description: The number of requests to write a block to the Aria page cache.

• Scope: Global

• Data Type: numeric

Aria_pagecache_writes

• Description: The number of blocks written to disk from the Aria page cache.

• Scope: Global

• Data Type: numeric

Aria_transaction_log_syncs

• Description: The number of Aria log fsyncs.

• Scope: Global

• Data Type: numeric
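These counters can be inspected at runtime, for example:

SHOW GLOBAL STATUS LIKE 'Aria%';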

    This page is licensed: CC BY-SA / Gnu FDL

    Aria storage engine
    Server Status Variables
    SHOW STATUS
    Full list of MariaDB options, system and status variables
    Renaming Maria (the engine)

    So now there was the database, MariaDB, and the storage engine, Maria. To end the confusion this caused, the decision was made to rename the storage engine.

    Monty's first suggestion was to name it Lucy, after his dog, but few who heard it liked that idea. So the decision was made that the next best thing was for the community to suggest and vote on names.

This was done by running a naming contest from 2009 through the end of May 2010. After that, the best names were voted on by the community, and Monty picked and announced the winner (Aria) at OSCon 2010 in Portland.

The winning entry was submitted by Chris Tooley. He received a Linux-powered System 76 Meerkat NetTop from Monty Program as his prize for suggesting the winning name.

    See Also

    This page is licensed: CC BY-SA / Gnu FDL

    Aria
    MyISAM
    To turn off the doublewrite buffer, set the innodb_doublewrite system variable to 0. This is safe on filesystems that write pages atomically - that is, a page write fully succeeds or fails. But with other filesystems, it is not recommended for production systems. An alternative option is atomic writes. See atomic write support for more details.
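For example, a minimal configuration sketch for disabling it at startup (only advisable on filesystems with atomic page writes):

[mariadb]
innodb_doublewrite = 0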

    This page is licensed: CC BY-SA / Gnu FDL

    InnoDB
    ODBC
    JDBC
    MYSQL
    TBL

    Aria Group Commit

    Learn about Aria's group commit functionality, which improves performance by batching commit operations to the transaction log.

    The Aria storage engine includes a feature to group commits to speed up concurrent threads doing many inserts into the same or different Aria tables.

    By default, group commit for Aria is turned off. It is controlled by the aria_group_commit and aria_group_commit_interval system variables.

    Information on setting server variables can be found on the Server System Variables page.
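A minimal configuration sketch, assuming the variables are set at server startup (the interval value is purely illustrative and is given in microseconds):

[mariadb]
aria_group_commit = hard
aria_group_commit_interval = 1000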

    Terminology

• A commit is a flush of logs followed by a sync.

• "Sent to disk" means written to disk but not sync()ed.

• "Flushed" means sent to disk and sync()ed.

• LSN means log serial number. It refers to a position in the transaction log.

    Non Group commit logic (aria_group_commit="none")

The thread that first started the commit performs the actual flush of logs. Other threads either set a new goal (LSN) for the next pass (if it is higher than the current goal) and wait for the pass to end, or simply wait for the pass to end.

    The effect of this is that a flush (write of logs + sync) will save all data for all threads/transactions that have been waiting since the last flush.

    If hard group commit is enabled (aria_group_commit="hard")

    If hard commit and aria_group_commit_interval=0

    The first thread sends all changed buffers to disk. This is repeated as long as there are new LSNs added. The process can not loop forever because we have a limited number of threads and they will wait for the data to be synced.

    Pseudo code:

    If hard commit and aria_group_commit_interval > 0

If fewer than aria_group_commit_interval microseconds have passed since the last sync, then after the buffers have been sent to disk, wait until that interval has passed since the last sync, then sync and return. This ensures that if syncs are infrequent, no extra waiting is introduced.

    If soft group commit is enabled (aria_group_commit="soft")

    Note that soft group commit should only be used if you can afford to lose a few rows if your machine shuts down hard (as in the case of a power failure).

Works like non group commit, but the thread doesn't do any real sync(). If aria_group_commit_interval is not zero, the sync() calls are performed by a service thread at the given rate when needed (a new LSN appears). If aria_group_commit_interval is zero, there are no sync() calls.

    Code

    The code for this can be found in storage/maria/ma_loghandler.c::translog_flush().

    This page is licensed: CC BY-SA / Gnu FDL

    Aria Two-step Deadlock Detection

    Explains Aria's deadlock detection mechanism, which uses a two-step process with configurable search depths and timeouts to resolve conflicts.

    Description

    The Aria storage engine can automatically detect and deal with deadlocks (see the Wikipedia deadlocks article).

    This feature is controlled by four configuration variables, two that control the search depth and two that control the timeout.

• deadlock_search_depth_short

• deadlock_search_depth_long

• deadlock_timeout_short

• deadlock_timeout_long

    How it Works

If Aria is ever unable to obtain a lock, we might have a deadlock. There are two primary ways of detecting whether a deadlock has actually occurred. The first is to search a wait-for graph (see wait-for graph on Wikipedia) and the second is to just wait and let the deadlock exhibit itself. Aria Two-step Deadlock Detection does a combination of both.

    First, if the lock request cannot be granted immediately, we do a short search of the wait-for graph with a small search depth as configured by the deadlock_search_depth_short variable. We have a depth limit because the graph can (theoretically) be arbitrarily big and we don't want to recursively search the graph arbitrarily deep. This initial, short search is very fast and most deadlocks are detected right away. If no deadlock cycles are found with the short search the system waits for the amount of time configured in deadlock_timeout_short to see if the lock conflicts are removed and the lock can be granted. Assuming this did not happen and the lock request still waits, the system then moves on to step two, which is a repeat of the process but this time searching deeper using the deadlock_search_depth_long. If no deadlock has been detected, it waits deadlock_timeout_long and times out.

    When a deadlock is detected the system uses a weighting algorithm to determine which thread in the deadlock should be killed and then kills it.
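A configuration sketch with illustrative values; check the variable reference for the exact defaults and units before changing them:

[mariadb]
# short, fast wait-for graph search, then a deeper second search
deadlock_search_depth_short = 4
deadlock_search_depth_long = 15
# waits between and after the two searches
deadlock_timeout_short = 10000
deadlock_timeout_long = 50000000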

    This page is licensed: CC BY-SA / Gnu FDL

    Using CONNECT - General Information

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

The main characteristic of CONNECT is to enable accessing data scattered on a machine as if it were a centralized database. This, and the fact that locking is not used by CONNECT (data files are opened and closed for each query), makes CONNECT very useful for importing or exporting data into or from a MariaDB database, and also for all types of Business Intelligence applications. However, it is not suited for transactional applications.

For instance, the index type used by CONNECT is closer to bitmap indexing than to B-trees. It is very fast for retrieving results but not for updates. In fact, even if only one indexed value is modified in a big table, the index is entirely remade (though this is still four to five times faster than for a B-tree index). But normally in Business Intelligence applications, files are not modified that often.

    If you are using CONNECT to analyze files that can be modified by an external process, the indexes are of course not modified by it and become outdated. Use the OPTIMIZE TABLE command to update them before using the tables based on them.

This also means that CONNECT is not designed to be used by centralized servers, which are mostly used for transactions and often must run for a long time without human intervention.

    Performance

Performance varies a great deal depending on the table type. For instance, ODBC tables are only retrieved as fast as the other DBMS can deliver them. If you have a lot of queries to execute, the best way to optimize your work can sometimes be to translate the data from one type to another. Fortunately this is very simple with CONNECT. Fixed formats like FIX, BIN or VEC tables can be created from slower ones by commands such as:
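A hypothetical sketch of such a conversion (the table names and column list are made up):

CREATE TABLE fasttable (
  id INT NOT NULL,
  name CHAR(20) NOT NULL
) ENGINE=CONNECT table_type=FIX
SELECT id, name FROM slowtable;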

FIX and BIN are often the better choice because their I/O is done on blocks of BLOCK_SIZE rows. VEC tables can be very efficient for tables with many columns of which only a few are used in each query. Furthermore, for tables of reasonable size, the MAPPED option can very often speed up many queries.

    Create Table statement

    Be aware of the two broad kinds of CONNECT tables:

    Drop Table statement

For outward tables, the DROP TABLE statement just removes the table definition but does not erase the table data. However, dropping an inward table also erases the table data.

    Alter Table statement

Be careful using the ALTER TABLE statement. Currently, data compatibility is not tested and the modified definition can become incompatible with the data. In particular, ALTER modifies the table definition only and does not modify the table data. Consequently, the table type should not be modified this way, except to correct an incorrect definition. Also, adding, dropping or modifying columns may be wrong because the default offset values (when not explicitly given by the FLAG option) may be wrong when recalculated with missing columns.

Safe uses of ALTER are for indexing, as we have seen earlier, and for changing options such as MAPPED or HUGE that do not impact the data format but just the way the data file is accessed. Modifying the BLOCK_SIZE option is all right with FIX, BIN, DBF, and split VEC tables; however, it is unsafe for VEC tables that are not split (only one data file) because at their creation the estimated size was made a multiple of the block size. This can cause errors if this estimate is not a multiple of the new value of the block size.

    In all cases, it is safer to drop and re-create the table (outward tables) or to make another one from the table that must be modified.

    Update and Delete for File Tables

    CONNECT can execute these commands using two different algorithms:

    • It can do it in place, directly modifying rows (update) or moving rows (delete) within the table file. This is a fast way to do it in particular when indexing is used.

    • It can do it using a temporary file to make the changes. This is required when updating variable record length tables and is more secure in all cases.

The choice between these algorithms depends on the connect_use_tempfile session variable.

    This page is licensed: GPLv2

    CONNECT Security

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

The use of the CONNECT engine requires the FILE privilege for "outward" tables. This should not be an important restriction. Using CONNECT "outward" tables on a remote server is of limited interest without knowing which files exist on it, and must be protected anyway. On the other hand, using it on the local client machine is not an issue because it is always possible to locally create a user with the FILE privilege.

    This page is licensed: GPLv2

    Current Status of the CONNECT Handler

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

The CONNECT handler is a GA (stable) release. It was written starting both from an aborted project written for MySQL in 2004 and from the “DBCONNECT” program. It was tested on all the examples described in this document, and is distributed with a set of 53 test cases. Here is a non-exhaustive list of future developments:

    1. Adding more table types.

2. Making more test files (53 are already made).

    3. Adding more data types, in particular unsigned ones (done for unsigned).

    4. Supporting indexing on nullable and decimal columns.

    5. Adding more optimize tools (block indexing, dynamic indexing, etc.) (done)

    6. Supporting MRR (done)

    7. Supporting partitioning (done)

    8. Getting NOSQL data from the Net as answers from REST queries (done)

No programs are bug free, especially new ones. Please report bugs or documentation errors using the means provided by MariaDB.

    This page is licensed: GPLv2

    Aria Storage Formats

    Understand the different row formats supported by Aria, particularly the default PAGE format which enables crash safety and better concurrency.

The Aria storage engine supports three different table storage formats.

These are FIXED, DYNAMIC and PAGE, and they can be set with the ROW_FORMAT option in the CREATE TABLE statement. PAGE is the default format, while FIXED and DYNAMIC are essentially the same as the MyISAM formats.

The SHOW TABLE STATUS statement can be used to see the storage format used by a table.
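For example (the table name and columns are hypothetical):

CREATE TABLE aria_page_example (
  id INT NOT NULL,
  payload VARCHAR(100)
) ENGINE=Aria ROW_FORMAT=PAGE TRANSACTIONAL=1;

-- the Row_format column shows the storage format in use
SHOW TABLE STATUS LIKE 'aria_page_example'\G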

    Fixed-length

Fixed-length (or static) tables contain records of a fixed length. Each column is the same length for all records, regardless of the actual contents. It is the default format if a table has no BLOB, TEXT, VARCHAR or VARBINARY fields, and no ROW_FORMAT is provided. You can also specify a fixed table with ROW_FORMAT=FIXED in the table definition.

    Introduction to the CONNECT Engine

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    CONNECT is not just a new “YASE” (Yet another Storage Engine) that provides another way to store data with additional features. It brings a new dimension to MariaDB, already one of the best products to deal with traditional database transactional applications, further into the world of business intelligence and data analysis, including NoSQL facilities. Indeed, BI is the set of techniques and tools for the transformation of raw data into meaningful and useful information. And where is this data?

    "It's amazing in an age where relational databases reign supreme when it comes to managing data that so much information still exists outside RDBMS engines in the form of flat files and other such constructs. In most enterprises, data is passed back and forth between disparate systems in a fashion and speed that would rival the busiest expressways in the world, with much of this data existing in common, delimited files. Target systems intercept these source files and then typically proceed to load them via ETL (extract, transform, load) processes into databases that then utilize the information for business intelligence, transactional functions, or other standard operations. ETL tasks and data movement jobs can consume quite a bit of time and resources, especially if large volumes of data are present that require loading into a database. This being the case, many DBAs welcome alternative means of accessing and managing data that exists in file format."

    CONNECT Table Types - OEM: Implemented in an External LIB

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    Although CONNECT provides a rich set of table types, specific applications may need to access data organized in a way that is not handled by its existing foreign data wrappers (FDW). To handle these cases, CONNECT features an interface that enables developers to implement in C++ the required table wrapper and use it as if it were part of the standard CONNECT table type list. CONNECT can use these additional handlers providing the corresponding external module (dll or shared lib) be available.

    To create such a table on an existing handler, use a Create Table statement as shown below.

    The option module gives the name of the DLL or shared library implementing the OEM wrapper for the table type. This library must be located in the plugin directory like all other plugins or UDF’s.

This library must export a function GetMYTYPE.

    Using CONNECT - Importing File Data Into MariaDB Tables

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

Directly using external (file) data has many advantages, such as working on "fresh" data produced, for instance, by cash registers, telephone switches, or scientific apparatus. However, you may want in some cases to import external data into your MariaDB database. This is extremely simple and flexible using the CONNECT handler. For instance, let us suppose you want to import the data of the xsample.xml XML file previously given in example into a table called biblio belonging to the connect database. All you have to do is to create it by:

    This last statement creates the table and inserts the original XML data, translated to tabular format by the xsampall2 CONNECT table, into the MariaDB biblio table. Note that further transformation on the data could have been achieved by using a more elaborate Select statement in the Create statement, for instance using filters, alias or applying functions to the data. However, because the Create Table process copies table data, later modifications of the

    CONNECT VEC Table Type

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    Warning: Avoid using this table type in production applications. This file format is specific to CONNECT and may not be supported in future versions.

    Tables of type VEC are binary files that in some cases can provide good performance on read-intensive query workloads. CONNECT organizes their data on disk as columns of values from the same attribute, as opposed to storing it as rows of tabular records. This organization means that when a query needs to access only a few columns of a particular table, only those columns need to be read from disk. Conversely, in a row-oriented table, all values in a table are typically read from disk, wasting I/O bandwidth.

    CONNECT provides two integral VEC formats, in which each column's data is adjacent.

    Inward and Outward Tables

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    There are two broad categories of file-based CONNECT tables. Inward and Outward. They are described below.

    Outward Tables

    Tables containing BLOB or TEXT fields cannot be FIXED, as by design these are both dynamic fields.

    Fixed-length tables have a number of characteristics:

    • fast, since MariaDB will always know where a record begins

    • easy to cache

• take up more space than dynamic tables, as the maximum amount of storage space is allocated to each record.

    • reconstructing after a crash is uncomplicated due to the fixed positions

    • no fragmentation or need to re-organize, unless records have been deleted and you want to free the space up.

    Dynamic

    Dynamic tables contain records of a variable length. It is the default format if a table has any BLOB, TEXT, VARCHAR or VARBINARY fields, and no ROW FORMAT is provided. You can also specify a DYNAMIC table with ROW_FORMAT=DYNAMIC in the table definition.

Dynamic tables have a number of characteristics:

    • Each row contains a header indicating the length of the row.

• Rows tend to become fragmented easily. Updating a record to be longer will likely cause it to be stored in different places on the disk.

    • All string columns with a length of four or more are dynamic.

    • They require much less space than fixed-length tables.

    • Restoring after a crash is more complicated than with FIXED tables.

    Page

    Page format is the default format for Aria tables, and is the only format that can be used if TRANSACTIONAL=1.

    Page tables have a number of characteristics:

• Data is cached by the page cache, which gives better random-access performance as it uses fewer system calls.

• Does not fragment as easily as the DYNAMIC format during updates. The maximum number of fragments is very low.

• Updates more quickly than dynamic tables.

• Has a slight storage overhead, mainly noticeable on very small rows.

• Slower to perform a full table scan.

• Slower if there are multiple duplicated keys, as Aria will first write a row, then the keys, and only then check for duplicates.

    Transactional

    See Aria Storage Engine for the impact of the TRANSACTIONAL option on the row format.

    This page is licensed: CC BY-SA / Gnu FDL

    Aria
    CREATE TABLE
    MyISAM formats
    SHOW TABLE STATUS
    • Robin Schumacher[1]

    What he describes is known as MED (Management of External Data) enabling the handling of data not stored in a DBMS database as if it were stored in tables. An ISO standard exists that describes one way to implement and use MED in SQL by defining foreign tables for which an external FDW (Foreign Data Wrapper) has been developed in C.

However, since this was written, a new source of data has developed: the “cloud”. Data exists worldwide and, in particular, can be obtained in JSON or XML format in answer to REST queries. From Connect 1.06.0010, it is possible to create JSON, XML or CSV tables based on data retrieved from such REST queries.

    MED as described above is a rather complex way to achieve this goal and MariaDB does not support the ISO SQL/MED standard. But, to cover the need, possibly in transactional but mostly in decision support applications, the CONNECT storage engine supports MED in a much simpler way.

    The main features of CONNECT are:

    1. No need for additional SQL language extensions.

    2. Embedded wrappers for many external data types (files, data sources, virtual).

    3. NoSQL query facilities for JSON, XML, HTML files and using JSON UDFs.

    4. NoSQL data obtained from REST queries (requires cpprestsdk).

    5. NoSQL new data type accessing MongoDB collections as relational tables.

    6. Read/Write access to external files of most commonly used formats.

    7. Direct access to most external data sources via ODBC, JDBC and MySQL or MongoDB API.

    8. Only used columns are retrieved from external scan.

    9. Push-down WHERE clauses when appropriate.

    10. Support of special and virtual columns.

    11. Parallel execution of multi-table tables (currently unavailable).

    12. Supports partitioning by sub-files or by sub-tables (enabling table sharding).

    13. Support of MRR for SELECT, UPDATE and DELETE.

    14. Provides remote, block, dynamic and virtual indexing.

    15. Can execute complex queries on remote servers.

    16. Provides an API that allows writing additional FDW in C++.

    With CONNECT, MariaDB has one of the most advanced implementations of MED and NoSQL, without the need for complex additions to the SQL syntax (foreign tables are "normal" tables using the CONNECT engine).

    Giving MariaDB easy and natural access to external data enables the use of all of its powerful functions and SQL-handling abilities for developing business intelligence applications.

With version 1.07 of CONNECT, retrieving data from REST queries is available in all binary distributed versions of MariaDB, and, from 1.07.002, CONNECT allows workspaces greater than 4GB.

    1. ↑ Robin Schumacher is Vice President Products at DataStax and former Director of Product Management at MySQL. He has over 13 years of database experience in DB2, MySQL, Oracle, SQL Server and other database engines.

The option subtype enables CONNECT to have the name of the exported function and to use the new table type. Other options are interpreted by the OEM type and can also be specified within the option_list option.

    Column definitions can be unspecified only if the external wrapper is able to return this information. For this it must export a function ColMYTYPE returning these definitions in a format acceptable by the CONNECT discovery function.

    Which and how options must be specified and the way columns must be defined may vary depending on the OEM type used and should be documented by the OEM type implementer(s).

    An OEM Table Example

The OEM table REST described in Adding the REST Feature as a Library Called by an OEM Table permits using REST-like tables with MariaDB binary distributions that contain but do not enable the REST table type.

    Of course, the mongo (dll or so) exporting the GetREST and colREST functions must be available in the plugin directory for all this to work.

    Some Currently Available OEM Table Modules and Subtypes

Module    Subtype   Description
libhello  HELLO     A sample OEM wrapper displaying a one-line table saying “Hello world”.
mongo     MONGO     Enables using tables based on MongoDB collections.
Tabfic    FIC       Handles files having the Windev HyperFile format.
Tabofx    OFC       Handles Open Financial Connectivity files.

    How to implement an OEM handler is out of the scope of this document.

    This page is licensed: GPLv2

xsample.xml file will not change the biblio table, and changes to the biblio table will not modify the xsample.xml file.

    All these can be combined or transformed by further SQL operations. This makes working with CONNECT much more flexible than just using the LOAD statement.

    This page is licensed: GPLv2

    CREATE TABLE biblio ENGINE=myisam SELECT * FROM xsampall2;
    MyISAM
    MyISAM
    Integral vector formats

In these true vertical formats, the VEC files are made of all the data of the first column, followed by all the data of the second column, and so on. All this can be in one physical file, or each column's data can be in a separate file. In the first case, the option max_rows=m, where m is an estimate of the maximum size (number of rows) of the table, must be specified in order to be able to insert new records. This leaves an empty space after each column area in which new data can be inserted. In the second case, the “Split” option can be specified[2] at table creation and each column is stored in a file named sequentially from the table file name followed by the rank of the column. Inserting new lines can freely augment such a table.

    Differences between vector formats

These formats correspond to different needs. The integral vector format provides the best performance gain. It is chosen when the speed of decisional queries must be optimized.

In the case of a unique file, inserting new data is limited but there will be only one open and close to do. However, the size of the table cannot be calculated from the file size because of the eventual unused space in the file. It must be kept in a header containing the maximum number of rows and the current number of valid rows in the table. To achieve this, specify the option Header=n when creating the table. If n=1 the header is placed at the beginning of the file, if n=2 it is a separate file with the type ‘.blk’, and if n=3 the header is placed at the end of the file. This last value is provided because batch inserting is sometimes slower when the header is at the beginning of the file. If not specified, the header option will default to 2 for this table type.

On the other hand, the "Split" format with separate files has none of these issues, and is a much safer solution when the table must be frequently inserted into or shared among several users.

    For instance:

    This table, split by default, will have the column values in files vt1.vec and vt2.vec.

    For vector tables, the option block_size=n is used for block reading and writing; however, to have a file made of blocks of equal size, the internal value of the max_rows=m option is eventually increased to become a multiple of n.

As with BIN tables, numeric values are stored using the platform's internal layout, the correspondence between column types and internal formats being the same as the default ones given above for BIN. However, field formats are not available for VEC tables.

    Header option

    This applies to VEC tables that are not split. Because the file size depends on the MAX_ROWS value, CONNECT cannot know how many valid records exist in the file. Depending on the value of the HEADER option, this information is stored in a header that can be placed at the beginning of the file, at the end of the file or in a separate file called fn.blk. The valid values for the HEADER option are:

0: Defaults to 2 for standard tables and to 3 for inward tables.

1: The header is at the beginning of the file.

2: The header is in a separate file.

3: The header is at the end of the file.

The value 2 can be used when dealing with files created by another application with no header. The value 3 sometimes makes inserting into the file faster than when the header is at the beginning of the file.

    Note: VEC being a file format specific to CONNECT, no big endian / little endian conversion is provided. These files are not portable between machines using a different byte order setting.

    This page is licensed: CC BY-SA / Gnu FDL

    Tables are "outward" when their file name is specified in the CREATE TABLE statement using the
    file_name
    option.

    Firstly, remember that CONNECT implements MED (Management of External Data). This means that the "true" CONNECT tables – "outward tables" – are based on data that belongs to files that can be produced by other applications or data imported from another DBMS.

    Therefore, their data is "precious" and should not be modified except by specific commands such as INSERT, UPDATE, or DELETE. For other commands such as CREATE, DROP, or ALTER their data is never modified or erased.

    Outward tables can be created on existing files or external tables. When they are dropped, only the local description is dropped, the file or external table is not dropped or erased. Also, DROP TABLE does not erase the indexes.

    ALTER TABLE produces the following warning, as a reminder:

    If the specified file does not exist, it is created when data is inserted into the table. If a SELECT is issued before the file is created, the following error is produced:

    Altering Outward Tables

    When an ALTER TABLE is issued, it just modifies the table definition accordingly without changing the data. ALTER can be used safely to, for instance, modify options such as MAPPED, HUGE or READONLY but with extreme care when modifying column definitions or order options because some column options such as FLAG should also be modified or may become wrong.

    Changing the table type with ALTER often makes no sense. But many suspicious alterations can be acceptable if they are just meant to correct an existing wrong definition.

Translating a CONNECT table to another engine is fine, but the opposite is forbidden when the target CONNECT table is not file based or when its data file exists (because the target table data cannot be changed, and if the source table were dropped, the table data would be lost). However, it can be done to create a new file-based table when its file does not exist or is void.

    Creating or dropping indexes is accepted because it does not modify the table data. However, it is often unsafe to do it with an ALTER TABLE statement that does other modifications.

    Of course, all changes are acceptable for empty tables.

    Note: Using outward tables requires the FILE privilege.

    Inward Tables

    A special type of file-based CONNECT tables are “inward” tables. They are file-based tables whose file name is not specified in the CREATE TABLE statement (no file_name option).

Their file is located in the current database directory and its name defaults to tablename.type, where tablename is the table name and type is the table type folded to lower case. When they are created without using a CREATE TABLE ... SELECT ... statement, an empty file is made at create time and they can be populated by further inserts.

    They behave like tables of other storage engines and, unlike outward CONNECT tables, they are erased when the table is dropped. Of course they should not be read-only to be usable. Even though their utility is limited, they can be used for testing purposes or when the user does not have the FILE privilege.

    Altering Inward Tables

    One thing to know, because CONNECT builds indexes in a specific way, is that all index modifications are done using an "in-place" algorithm – meaning not using a temporary table. This is why, when indexing is specified in an ALTER TABLE statement containing other changes that cannot be done "in-place", the statement cannot be executed and raises an error.

    Converting an inward table to an outward table, using an ALTER TABLE statement specifying a new file name and/or a new table type, is restricted the same way it is when converting a table from another engine to an outward table. However there are no restrictions to convert another engine table to a CONNECT inward table.

    This page is licensed: CC BY-SA / Gnu FDL

    Inward

They are tables whose file name is not specified at create time. An empty file is given a default name (tabname.tabtype) and is populated like for other engines. They do not require the FILE privilege and can be used for testing purposes.

    Outward

They are all other CONNECT tables and access external data sources or files. They are the truly useful tables but require the FILE privilege.

    DROP TABLE
    ALTER TABLE
    connect_use_tempfile
    deadlock_search_depth_short
    deadlock_timeout_long
    deadlock_timeout_short
    wait-for graph on Wikipedia

    Storage Engines Overview

    An introduction to MariaDB's pluggable storage engine architecture, highlighting key engines like InnoDB, MyISAM, and Aria for different workloads.

    Overview

    MariaDB features pluggable storage engines to allow per-table workload optimization.

    A storage engine is a type of plugin for MariaDB:

    • Different storage engines may be optimized for different workloads, such as transactional workloads, analytical workloads, or high throughput workloads.

    • Different storage engines may be designed for different use cases, such as federated table access, table sharding, and table archiving in the cloud.

    • Different tables on the same server may use different storage engines.

    Engine
    Target
    Optimization
    Availability

    Examples

    Identify the Default Storage Engine

Identify the server's global default storage engine by using SHOW GLOBAL VARIABLES to query the default_storage_engine system variable:

Identify the session's default storage engine by using SHOW SESSION VARIABLES:
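For example:

SHOW GLOBAL VARIABLES LIKE 'default_storage_engine';
SHOW SESSION VARIABLES LIKE 'default_storage_engine';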

    Set the Default Storage Engine

    Global default storage engine:

    Session default storage engine supersedes global default during this session:
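For example (the engine names are illustrative):

SET GLOBAL default_storage_engine = InnoDB;
SET SESSION default_storage_engine = Aria;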

    Configure the Default Storage Engine
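A configuration-file sketch that makes the choice persistent across restarts (the engine name is illustrative):

[mariadb]
default_storage_engine = InnoDB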

    Identify Available Storage Engines
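The available engines and their support status can be listed with:

SHOW ENGINES;
-- or equivalently
SELECT ENGINE, SUPPORT, TRANSACTIONS FROM information_schema.ENGINES;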

    Choose Storage Engine for a New Table

The storage engine is specified at the time of table creation using an ENGINE = parameter.
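For example (the table definition is hypothetical):

CREATE TABLE accounts (
  id INT PRIMARY KEY,
  balance DECIMAL(12,2) NOT NULL
) ENGINE = InnoDB;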

    Resources

    Engines for System Tables

    Standard MariaDB storage engines are used for System Table storage:

    FAQ

    Can I use more than one storage engine on a server?

    • Yes, different tables can use different storage engines on the same server.

• To create a table with a specific storage engine, specify the ENGINE table option in the CREATE TABLE statement.

    Can I use more than one storage engine in a single query?

    • Yes, a single query can reference tables that use multiple storage engines.

    • In some cases, special configuration may be required. For example, ColumnStore requires cross engine joins to be configured.

    What storage engine should I use for transactional or OLTP workloads?

• InnoDB is the recommended storage engine for transactional or OLTP workloads.

    What storage engine should I use for analytical or OLAP workloads?

• ColumnStore is the recommended storage engine for analytical or OLAP workloads.

    What storage engine should I use if my application performs both transactional and analytical queries?

An application that performs both transactional and analytical queries is known as hybrid transactional/analytical processing (HTAP).

HTAP can be implemented with MariaDB by using InnoDB for transactional queries and ColumnStore for analytical queries.

    Reference

    MariaDB Server Reference

• SHOW ENGINES, which shows available storage engines.

• The information_schema.TABLES table, which shows the storage engine used by each table.

    This page is: Copyright © 2025 MariaDB. All rights reserved.

    CSV Overview

    The CSV Storage Engine stores data in comma-separated values format text files, making it easy to exchange data with other applications.

    The CSV Storage Engine can read and append to files stored in CSV (comma-separated-values) format.

    The CSV storage engine and logging to tables

The CSV storage engine is used by default when logging of SQL queries to tables is enabled.

    CSV Storage Engine files

    When you create a table using the CSV storage engine, three files are created:

    • <table_name>.frm

    • <table_name>.CSV

    • <table_name>.CSM

    The .frm file is the table format file.

    The .CSV file is a plain text file. Data you enter into the table is stored as plain text in comma-separated-values format.

    The .CSM file stores metadata about the table such as the state and the number of rows in the table.

    Limitations

    • CSV tables do not support indexing.

    • CSV tables cannot be partitioned.

    • Columns in CSV tables must be declared as NOT NULL.

• No transaction support.

    Examples

    Forgetting to add NOT NULL:
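A sketch of the failure (the exact error text can vary between versions):

CREATE TABLE csv_bad (x INT, y DATE, z CHAR(10)) ENGINE=CSV;
-- fails with an error similar to:
-- ERROR 1178 (42000): The storage engine for the table doesn't support nullable columns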

    Creating, inserting and selecting:

    Viewing in a text editor:

    See Also

    This page is licensed: CC BY-SA / Gnu FDL

    Checking and Repairing CSV Tables

    Learn how to use CHECK TABLE and REPAIR TABLE to identify and fix corruptions in CSV tables, discarding rows from the first error onwards.

    CSV tables support the CHECK TABLE and REPAIR TABLE statements.

    CHECK TABLE marks the table as corrupt if it finds a problem, while REPAIR TABLE restores rows until the first corrupted row, discarding the rest.

    Examples

    CREATE TABLE csv_test (
      x INT NOT NULL, y DATE NOT NULL, z CHAR(10) NOT NULL
      ) ENGINE=CSV;
    
    INSERT INTO csv_test VALUES
        (1,CURDATE(),'one'),
        (2,CURDATE(),'two'),
        (3,CURDATE(),'three');
    SELECT * FROM csv_test;
    +---+------------+-------+
    | x | y          | z     |
    +---+------------+-------+
    | 1 | 2013-07-08 | one   |
    | 2 | 2013-07-08 | two   |
    | 3 | 2013-07-08 | three |
    +---+------------+-------+

    Using an editor, the actual file will look as follows

    Let's introduce some corruption with an unwanted quote in the 2nd row:

    We can repair this, but all rows from the corrupt row onwards are lost:
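A sketch of the check-and-repair sequence (the reported messages are illustrative):

CHECK TABLE csv_test;
-- Msg_text reports the table as corrupt

REPAIR TABLE csv_test;
-- Msg_text reports OK

SELECT * FROM csv_test;
-- only the rows before the corrupted row remain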

    This page is licensed: CC BY-SA / Gnu FDL

    Binary Log Group Commit and InnoDB Flushing Performance

    Understand how group commit works with InnoDB to improve performance by reducing the number of disk syncs required during transaction commits.

    Overview

    When both innodb_flush_log_at_trx_commit=1 (the default) is set and the binary log is enabled, there is now one less sync to disk inside InnoDB during commit (2 syncs shared between a group of transactions instead of 3).

Durability of commits is not decreased — this is because even if the server crashes before the commit is written to disk by InnoDB, it is recovered from the binary log at the next server startup (and it is guaranteed that sufficient information is synced to disk so that such a recovery is always possible).

    Switching to Old Flushing Behavior

The old behavior, with 3 syncs to disk per (group) commit (and consequently lower performance), can be selected with the innodb_flush_log_at_trx_commit=3 option. There is normally no benefit to doing this, however there are a couple of edge cases to be aware of.

    Non-durable Binary Log Settings

If innodb_flush_log_at_trx_commit=1 is set and the binary log is enabled, but sync_binlog=0 is set, then commits are not guaranteed to be durable inside InnoDB after a crash. This is because if sync_binlog=0 is set and the server crashes, then transactions that were not flushed to the binary log prior to the crash are missing from the binary log.

In this specific scenario, innodb_flush_log_at_trx_commit=3 can be set to ensure that transactions are durable in InnoDB, even if they are not necessarily durable from the perspective of the binary log.

One should be aware that if sync_binlog=0 is set, then a crash is nevertheless likely to cause transactions to be missing from the binary log. This will cause the binary log and InnoDB to be inconsistent with each other. This is also likely to cause any replicas to become inconsistent, since transactions are replicated through the binary log. Thus it is recommended to set sync_binlog=1. With the group commit improvements, this setting has a much smaller penalty in recent versions compared to older versions of MariaDB and MySQL.
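A configuration sketch for fully durable commits with the binary log enabled:

[mariadb]
# sync the InnoDB redo log on commit
innodb_flush_log_at_trx_commit = 1
# sync the binary log on every commit group
sync_binlog = 1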

    Recent Transactions Missing from Backups

Physical backups only see transactions that have been flushed to the InnoDB redo log. With these improvements, there may be a small delay (defined by the innodb_flush_log_at_timeout system variable) between when a commit happens and when the commit is included in a backup.

Note that the backup will still be fully consistent with itself. This problem is normally not an issue in practice. A backup usually takes a long time to complete (relative to the 1 second or so that innodb_flush_log_at_timeout is normally set to), and a backup usually includes a lot of transactions that were committed during the backup. With this in mind, it is not generally noticeable if the backup does not include transactions that were committed during the last second or so of the backup process. It is just mentioned here for completeness.

    This page is licensed: CC BY-SA / Gnu FDL

    InnoDB Lock Modes

    InnoDB employs Row-Level Locking with Shared (S) and Exclusive (X) locks, along with Intention locks, to manage concurrent transaction access.

    Locks are acquired by a transaction to prevent concurrent transactions from modifying, or even reading, some rows or ranges of rows. This is done to make sure that concurrent write operations never collide.

    InnoDB supports a number of lock modes.

    Shared and Exclusive Locks

The two standard row-level locks are shared locks (S) and exclusive locks (X).

    A shared lock is obtained to read a row, and allows other transactions to read the locked row, but not to write to the locked row. Other transactions may also acquire their own shared locks.

An exclusive lock is obtained to write to a row, and stops other transactions from locking the same row. Its specific behavior depends on the transaction isolation level; the default (REPEATABLE READ) allows other transactions to read from the exclusively locked row.
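A sketch of requesting each lock type explicitly inside a transaction (the table and values are hypothetical):

START TRANSACTION;
-- shared (S) lock on the matching row
SELECT * FROM accounts WHERE id = 1 LOCK IN SHARE MODE;
-- exclusive (X) lock on the matching row
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;
COMMIT;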

    Intention Locks

    InnoDB also permits table locking, and to allow locking at both table and row level to co-exist gracefully, a series of locks called intention locks exist.

An intention shared lock (IS) indicates that a transaction intends to set a shared lock.

An intention exclusive lock (IX) indicates that a transaction intends to set an exclusive lock.

    Whether a lock is granted or not can be summarised as follows:

    • An X lock is not granted if any other lock (X, S, IX, IS) is held.

    • An S lock is not granted if an X or IX lock is held. It is granted if an S or IS lock is held.

• An IX lock is not granted if an X or S lock is held. It is granted if an IX or IS lock is held.

    • An IS lock is not granted if an X lock is held. It is granted if an S, IX or IS lock is held.

    AUTO_INCREMENT Locks

Locks are also required for auto-increments. See AUTO_INCREMENT Handling in InnoDB.

    Gap Locks

With the default isolation level, REPEATABLE READ, and the default setting of the innodb_locks_unsafe_for_binlog variable (in versions that include it), a method called gap locking is used. When InnoDB sets a shared or exclusive lock on a record, it is actually set on the index record. Records will have an internal InnoDB index even if they don't have a unique index defined. At the same time, a lock is held on the gap before the index record, so that another transaction cannot insert a new index record in the gap between the record and the preceding record.

    The gap can be a single index value, multiple index values, or not exist at all depending on the contents of the index.

If a statement uses all the columns of a unique index to search for a unique row, gap locking is not used.

    Similar to the shared and exclusive intention locks described above, there can be a number of types of gap locks. These include the shared gap lock, exclusive gap lock, intention shared gap lock and intention exclusive gap lock.

Gap locks are disabled if the innodb_locks_unsafe_for_binlog system variable is set (in versions that include it), or if the transaction isolation level is set to READ COMMITTED.

    This page is licensed: CC BY-SA / Gnu FDL

    InnoDB Monitors

    InnoDB Monitors, such as the Standard, Lock, and Tablespace monitors, provide detailed internal state information to the error log for diagnostics.

The InnoDB Monitor refers to particular kinds of monitors included in MariaDB and, since the early versions, in MySQL.

There are four types: the standard InnoDB Monitor, the InnoDB Lock Monitor, the InnoDB Tablespace Monitor and the InnoDB Table Monitor.

    Standard InnoDB Monitor

    The standard InnoDB Monitor returns extensive InnoDB information, particularly lock, semaphore, I/O and buffer activity:

To enable the standard InnoDB Monitor, set the innodb_status_output system variable to 1. In older versions, running the following statement was the method used:

To disable the standard InnoDB Monitor, either set the system variable back to zero or, in older versions, drop the innodb_monitor table.

    The CREATE TABLE and DROP TABLE method of enabling and disabling the InnoDB Monitor has been deprecated, and may be removed in a future version of MariaDB.

For a description of the output, see SHOW ENGINE INNODB STATUS.
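A sketch of toggling the standard monitor at runtime with the system-variable method:

SET GLOBAL innodb_status_output = ON;   -- start writing output to the error log
SET GLOBAL innodb_status_output = OFF;  -- stop it again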

    InnoDB Lock Monitor

    The InnoDB Lock Monitor displays additional lock information.

To enable the InnoDB Lock Monitor, the standard InnoDB Monitor must be enabled. Then, set the innodb_status_output_locks system variable to 1. In older versions, running the following statement was the method used:

To disable the InnoDB Lock Monitor, either set the system variable back to zero or, in older versions, drop the innodb_lock_monitor table.

    The CREATE TABLE and DROP TABLE method of enabling and disabling the InnoDB Lock Monitor has been deprecated, and may be removed in a future version of MariaDB.

    InnoDB Tablespace Monitor

    The InnoDB Tablespace Monitor is deprecated, and may be removed in a future version of MariaDB.

    Enabling the Tablespace Monitor outputs a list of file segments in the shared tablespace to the error log, and validates the tablespace allocation data structures.

    To enable the Tablespace Monitor, run the following statement:

    To disable it, drop the table:

    InnoDB Table Monitor

    The InnoDB Table Monitor is deprecated, and may be removed in a future version of MariaDB.

    Enabling the Table Monitor outputs the contents of the InnoDB internal data dictionary to the error log every fifteen seconds.

    To enable the Table Monitor, run the following statement:

    To disable it, drop the table:

    SHOW ENGINE INNODB STATUS

The SHOW ENGINE INNODB STATUS statement can be used to obtain the standard InnoDB Monitor output on demand, rather than sending it to the error log. It will also display the InnoDB Lock Monitor information if the innodb_status_output_locks system variable is set to 1.

    This page is licensed: CC BY-SA / Gnu FDL

    MariaDB Enterprise Server InnoDB Background Thread Pool

    This page details the dedicated thread pool in MariaDB Enterprise Server that manages InnoDB background tasks, improving scalability and performance.

    Overview

    Starting with MariaDB Enterprise Server 10.5 and MariaDB Community Server 10.5, InnoDB uses the InnoDB Background Thread Pool to perform internal operations in the background. In previous versions, the internal operations were performed by dedicated threads. By using the InnoDB Background Thread Pool instead of many dedicated threads, InnoDB can reduce context switching and use system resources more effectively.

    The InnoDB Background Thread Pool performs internal operations in multiple categories: tasks, timers, and asynchronous I/O.

    Tasks are used to perform internal operations that are triggered by some event. In ES 10.5 and later and CS 10.5 and later, the following threads have been replaced by tasks with the InnoDB Background Thread Pool:

    • The InnoDB Buffer Pool Resize Thread

    • The InnoDB Buffer Pool Dump Thread

    • The InnoDB Full-Text Search (FTS) Optimization Thread

    Timers are used to perform internal operations that are triggered periodically. In ES 10.5 and later and CS 10.5 and later, the following threads have been replaced by timers with the InnoDB Background Thread Pool:

    • The InnoDB Master Thread

    • The InnoDB Defragmentation Thread

    • The InnoDB Monitor Thread

    • The InnoDB Error Monitor Thread

    Asynchronous I/O is used to read from and write to disk asynchronously. In ES 10.5 and later and CS 10.5 and later, the following threads have been replaced by asynchronous I/O with the InnoDB Background Thread Pool:

• The InnoDB I/O Threads

    Feature Summary

    Feature
    Detail
    Resources

    This page is: Copyright © 2025 MariaDB. All rights reserved.

    ARCHIVE

    The Archive storage engine is optimized for high-speed insertion and compression of large amounts of data, suitable for logging and auditing.

The ARCHIVE storage engine uses gzip to compress rows. It is mainly used for storing large amounts of data, without indexes, with only a very small footprint.

    A table using the ARCHIVE storage engine is stored in two files on disk. There's a table definition file with an extension of .frm, and a data file with the extension .ARZ. At times during optimization, a .ARN file will appear.

    New rows are inserted into a compression buffer and are flushed to disk when needed. SELECTs cause a flush. Sometimes, rows created by multi-row inserts are not visible until the statement is complete.

ARCHIVE allows a maximum of one key. The key must be on an AUTO_INCREMENT column, and can be a PRIMARY KEY or a non-unique key. However, it has a limitation: it is not possible to insert a value which is lower than the next AUTO_INCREMENT value.
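A minimal sketch of an ARCHIVE table (the table definition is hypothetical):

CREATE TABLE audit_log (
  id BIGINT AUTO_INCREMENT PRIMARY KEY,
  logged_at DATETIME NOT NULL,
  message VARCHAR(255) NOT NULL
) ENGINE=ARCHIVE;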

    CONNECT

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    Note: You can download a PDF version of the CONNECT documentation (1.7.0003):

    Connect Version
    Introduced
    Maturity

    CONNECT PROXY Table Type

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

A PROXY table is a table that accesses and reads the data of another table or view. For example, to create a table based on the boys FIX table:

CREATE TABLE xboy ENGINE=CONNECT
  table_type=PROXY tabname=boys;

Or more simply, since PROXY is the default type when TABNAME is specified:

CREATE TABLE xboy ENGINE=CONNECT tabname=boys;

Because the boys table can be used directly, what is the use of a proxy table? Its main use is to be used internally by other table types such as TBL, XCOL, OCCUR, or PIVOT.

    InnoDB Data Scrubbing

    This feature allows for the secure deletion of data by overwriting deleted records in tablespaces and logs to prevent data recovery.

Most of the background and redo log scrubbing code has been removed (see MDEV-15528 and MDEV-21870).

    Sometimes there is a requirement that when some data is deleted, it is really gone. This might be the case when one stores user's personal information or some other sensitive data. Normally though, when a row is deleted, the space is only marked as free on the page. It may eventually be overwritten, but there is no guarantee when that will happen. A copy of the deleted rows may also be present in the log files.

Support for data scrubbing: background threads periodically scan tablespaces and logs and remove all data that should be deleted. The number of background threads for tablespace scans is set by innodb_encryption_threads. Log scrubbing happens in a separate thread.

    To configure scrubbing one can use the following variables:
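A minimal option-file sketch using the scrubbing variables (most of them are deprecated and ignored in recent versions):

[mariadb]
...
innodb_immediate_scrub_data_uncompressed=ON
innodb_background_scrub_data_uncompressed=ON
innodb_background_scrub_data_compressed=ON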

    InnoDB File Format

    Learn about the different file formats supported by InnoDB, such as Antelope and Barracuda, and how they impact table features and storage.

    Setting a Table's File Format

A table's tablespace is tagged with the lowest InnoDB file format that supports the table's row format. So, even if the Barracuda file format is enabled, tables that use the COMPACT or REDUNDANT row formats are tagged with the Antelope file format in the information_schema.INNODB_SYS_TABLES table.
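As a sketch, on versions that still distinguish Antelope and Barracuda, a table ends up in the Barracuda format by enabling that format and choosing a row format that requires it (the table name is only illustrative):

SET GLOBAL innodb_file_format='Barracuda';
CREATE TABLE ff_test (id INT PRIMARY KEY) ENGINE=InnoDB ROW_FORMAT=DYNAMIC;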

    MariaDB Enterprise Server InnoDB I/O Threads

    Learn about the specialized I/O threads in MariaDB Enterprise Server's InnoDB engine that handle asynchronous read and write operations efficiently.

    Overview

Starting with MariaDB Enterprise Server 10.5 and MariaDB Community Server 10.5, the InnoDB I/O Threads were replaced by the asynchronous I/O functionality in the InnoDB Background Thread Pool.

    CREATE TABLE xtab (COLUMN definitions)
    ENGINE=CONNECT table_type=OEM MODULE='libname'
    subtype='MYTYPE' [standard table options]
    Option_list='Myopt=foo';
    CREATE TABLE vtab (
    a INT NOT NULL,
    b CHAR(10) NOT NULL)
    ENGINE=CONNECT table_type=VEC file_name='vt.vec';
    Warning (Code 1105): This is an outward table, table data were not modified.
    Warning (Code 1105): Open(rb) error 2 on <file_path>: No such file or directory
    do
       send changed buffers to disk
     while new_goal
    sync
    CREATE TABLE fastable table_specs SELECT * FROM slowtable;
    mysqld --log-output=table
    $ cat csv_test.CSV
    1,"2013-07-08","one"
    2,"2013-07-08","two"
    3,"2013-07-08","three"
    CREATE TABLE innodb_monitor (a INT) ENGINE=INNODB;

Tabofx (QIF): Handles Quicken Interchange Format files.

Cirpack (CRPK): Handles CDRs from Cirpack UTPs.

Tabplg (PLG): Accesses tables from the PlugDB DBMS.


    CONNECT Table Types

    The CONNECT storage engine has been deprecated.

    1,"2013-07-08","one"
    2","2013-07-08","two"
    3,"2013-07-08","three"
    CHECK TABLE csv_test;
    +---------------+-------+----------+----------+
    | Table         | Op    | Msg_type | Msg_text |
    +---------------+-------+----------+----------+
    | test.csv_test | check | error    | Corrupt  |
    +---------------+-------+----------+----------+
    REPAIR TABLE csv_test;
    +---------------+--------+----------+----------------------------------------+
    | Table         | Op     | Msg_type | Msg_text                               |
    +---------------+--------+----------+----------------------------------------+
    | test.csv_test | repair | Warning  | Data truncated for column 'x' at row 2 |
    | test.csv_test | repair | status   | OK                                     |
    +---------------+--------+----------+----------------------------------------+
    
    SELECT * FROM csv_test;
    +---+------------+-----+
    | x | y          | z   |
    +---+------------+-----+
    | 1 | 2013-07-08 | one |
    +---+------------+-----+

    Cache, Temp

    Temporary Data

    ES 10.5+

    Reads

    Reads

    ES 10.5+

    Write-Heavy

    I/O Reduction, SSD

    ES 10.5+

    Cloud

    Read-Only

    ES 10.5+

    Federation

    Sharding, Interlink

    ES 10.5+

    Aria

    Read-Heavy

    Reads

    ES 10.5+

    Analytics, HTAP

    Big Data, Analytical

    ES 10.5+

    InnoDB

    General Purpose

    Mixed Read/Write


The original CSV format does not enable IETF-compatible parsing of embedded quote and comma characters. Later versions make it possible to do so by setting the IETF_QUOTES option when creating a table.
    Installing the Plugin

    Although the plugin's shared library is distributed with MariaDB by default, the plugin is not actually installed by MariaDB by default. There are two methods that can be used to install the plugin with MariaDB.

The first method can be used to install the plugin without restarting the server. You can install the plugin dynamically by executing INSTALL SONAME or INSTALL PLUGIN:

INSTALL SONAME 'ha_archive';

The second method can be used to tell the server to load the plugin when it starts up. The plugin can be installed this way by providing the --plugin-load or the --plugin-load-add options. This can be specified as a command-line argument to mysqld or it can be specified in a relevant server option group in an option file:

[mariadb]
...
plugin_load_add = ha_archive

    Uninstalling the Plugin

You can uninstall the plugin dynamically by executing UNINSTALL SONAME or UNINSTALL PLUGIN:

UNINSTALL SONAME 'ha_archive';

    If you installed the plugin by providing the --plugin-load or the --plugin-load-add options in a relevant server option group in an option file, then those options should be removed to prevent the plugin from being loaded the next time the server is restarted.

    Characteristics

    • Supports INSERT and SELECT, but not DELETE, UPDATE or REPLACE.

    • Data is compressed with zlib as it is inserted, making it very small.

• Data is slow to select, as it needs to be uncompressed and, besides the query cache, there is no caching.

    • Supports AUTO_INCREMENT (since MariaDB/MySQL 5.1.6), which can be a unique or a non-unique index.

    • Since MariaDB/MySQL 5.1.6, selects scan past BLOB columns unless they are specifically requested, making these queries much more efficient.

• Does not support spatial data types.

• Does not support transactions.

• Does not support foreign keys.

• Does not support virtual columns.

    • No storage limit.

    • Supports row locking.

• Supports table discovery, and the server can access ARCHIVE tables even if the corresponding .frm file is missing.

• OPTIMIZE TABLE and REPAIR TABLE can be used to compress the table in its entirety, resulting in slightly better compression.

    • With MariaDB, it is possible to upgrade from the MySQL 5.0 format without having to dump the tables.

• INSERT DELAYED is supported.

• Running many SELECTs during the insertions can deteriorate the compression, unless only multi-row INSERTs and INSERT DELAYED are used.


Connect 1.07.0001 (Stable)
Connect 1.06.0010 (Stable)
Connect 1.06.0007 (Stable)
Connect 1.06.0005 (Stable)
Connect 1.06.0004 (Stable)
Connect 1.06.0001 (Stable)
Connect 1.05.0003 (Beta)
Connect 1.05.0001 (Stable)
Connect 1.04.0008 (Stable)
Connect 1.04.0006 (Stable)
Connect 1.04.0005 (Stable)
Connect 1.04.0003 (Beta)
Beta

    The CONNECT storage engine enables MariaDB to access external local or remote data (MED). This is done by defining tables based on different data types, in particular files in various formats, data extracted from other DBMS or products (such as Excel or MongoDB) via ODBC or JDBC, or data retrieved from the environment (for example DIR, WMI, and MAC tables)

    This storage engine supports table partitioning, MariaDB virtual columns and permits defining_special_ columns such as ROWID, FILEID, and SERVID.

    No precise definition of maturity exists. Because CONNECT handles many table types, each type has a different maturity depending on whether it is old and well-tested, less well-tested or newly implemented. This is indicated for all data types.

    Connect 1.07.0002


Sure enough, PROXY tables are CONNECT tables, meaning that they can be based on tables of any engine and accessed by table types that need to access CONNECT tables.

    Proxy on non-CONNECT Tables

When the sub-table is a view or not a CONNECT table, CONNECT internally creates a temporary CONNECT table of MYSQL type to access it. This connection uses the same default parameters as for a MYSQL table. It is also possible to specify them to the PROXY table by using, in the PROXY declaration, the same OPTION_LIST options as for a MYSQL table. Of course, it is simpler and more natural to use the MYSQL type directly in this case.

Normally, the default parameters should enable the PROXY table to reconnect to the server. However, an issue arises when the current user was logged in with a password. The security protocol prevents CONNECT from retrieving this password and requires it to be given in the PROXY table CREATE statement, for instance by adding to it:

... option_list='Password=mypass';

However, it is often not advisable to write in clear text a password that can be seen by any user able to see the table declaration with SHOW CREATE TABLE, in particular if the table is used when the current user is root. To avoid this, a specific user should be created on the local host to be used by proxy tables to retrieve local tables. This user can have minimum grant options, for instance SELECT on desired directories, and needs no password. Supposing ‘proxy’ is such a user, the option to add is:

... option_list='user=proxy';

    Using a PROXY Table as a View

A PROXY table can also be used by itself to modify the way a table is viewed. For instance, a proxy table does not use the indexes of the object table. It is also possible to define its columns with different names or types, to use only some of them, or to change their order. For instance:

CREATE TABLE city (
  city VARCHAR(11),
  boy CHAR(12) flag=1,
  birth DATE)
ENGINE=CONNECT tabname=boys;

SELECT * FROM city;

    This will display:

city     | boy    | birth
---------+--------+-----------
Boston   | John   | 1986-01-25
Boston   | Henry  | 1987-06-07
San Jose | George | 1981-08-10
Chicago  | Sam    | 1979-11-22

Here we did not have to specify column format or offset because the data are retrieved from the boys table, not directly from the boys.txt file. The flag option of the boy column indicates that it corresponds to the first column of the boys table, the name column.

    Avoiding PROXY table loop

CONNECT is able to test whether a PROXY, or PROXY-based, table refers directly or indirectly to itself. While a direct reference can be tested at table creation, an indirect reference can only be tested when executing a query on the table. However, this is possible only for local tables. When using remote tables or views, a problem can occur if the remote table or the view refers back to one of the local tables of the chain. The same caution should be used as when using FEDERATEDX tables.

    Note: All PROXY or PROXY-based tables are read-only in this version.

    Modifying Operations

    All INSERT / UPDATE / DELETE operations can be used with proxy tables. However, the same restrictions applying to the source table also apply to the proxy table.

    Note: All PROXY and PROXY-based table types are not indexable.

    This page is licensed: CC BY-SA / Gnu FDL

    TBL
    XCOL
    OCCUR
    PIVOT

innodb_background_scrub_data_check_interval (seconds): Check at this interval whether tablespaces need scrubbing. Deprecated and ignored.

innodb_background_scrub_data_compressed (boolean): Enable scrubbing of compressed data by background threads. Deprecated and ignored.

innodb_background_scrub_data_interval (seconds): Scrub spaces that were last scrubbed longer than this many seconds ago. Deprecated and ignored.

innodb_background_scrub_data_uncompressed (boolean): Enable scrubbing of uncompressed data by background threads. Deprecated and ignored.

innodb_immediate_scrub_data_uncompressed (boolean): Enable scrubbing of uncompressed data.

Redo log scrubbing did not fully work as intended, and was deprecated and ignored (MDEV-21870). If old log contents should be kept secret, enabling innodb_encrypt_log or setting a smaller innodb_log_file_size could help.

    The Information Schema INNODB_TABLESPACES_SCRUBBING table contains scrubbing information.

    Thanks

    • Scrubbing was donated to the MariaDB project by Google.

    This page is licensed: CC BY-SA / Gnu FDL

    Supported File Formats

    The InnoDB storage engine supports two different file formats:

    • Antelope

    • Barracuda

    Antelope

In older versions, the default file format is Antelope. In later versions, the Antelope file format is deprecated.

    Antelope is the original InnoDB file format. It supports the COMPACT and REDUNDANT row formats, but not the DYNAMIC or COMPRESSED row formats.

    Barracuda

In older versions, the Barracuda file format is only supported if the innodb_file_per_table system variable is set to ON. In later versions, the default file format is Barracuda and Antelope is deprecated.

    Barracuda is a newer InnoDB file format. It supports the COMPACT, REDUNDANT, DYNAMIC and COMPRESSED row formats. Tables with large BLOB or TEXT columns in particular could benefit from the dynamic row format.

    Future Formats

    InnoDB might use new file formats in the future. Each format will have an identifier from 0 to 25, and a name. The names have already been decided, and are animal names listed in an alphabetical order: Antelope, Barracuda, Cheetah, Dragon, Elk, Fox, Gazelle, Hornet, Impala, Jaguar, Kangaroo, Leopard, Moose, Nautilus, Ocelot, Porpoise, Quail, Rabbit, Shark, Tiger, Urchin, Viper, Whale, Xenops, Yak and Zebra.

Checking a Table's File Format

    The information_schema.INNODB_SYS_TABLES table can be queried to see the file format used by a table.

    A table's tablespace is tagged with the lowest InnoDB file format that supports the table's row format. So, even if the Barracuda file format is enabled, tables that use the COMPACT or REDUNDANT row formats are tagged with the Antelope file format in the information_schema.INNODB_SYS_TABLES table.
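For example, a query along these lines can be used (the FILE_FORMAT and ROW_FORMAT columns are assumed from releases that still expose the file format):

SELECT NAME, FILE_FORMAT, ROW_FORMAT
  FROM information_schema.INNODB_SYS_TABLES
  WHERE NAME LIKE 'test/%';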

    Compatibility

    Each tablespace is tagged with the id of the most recent file format used by one of its tables. All versions of InnoDB can read tables that use an older file format. However, it can not read from more recent formats. For this reason, each time InnoDB opens a table it checks the tablespace's format, and returns an error if a newer format is used.

This check can be skipped via the innodb_file_format_check variable. Beware that if InnoDB tries to repair a table in an unknown format, the table will be corrupted! This can happen on restart if innodb_file_format_check is disabled and the server previously crashed or was closed with a fast shutdown.

    To downgrade a table from the Barracuda format to Antelope, the table's ROW_FORMAT can be set to a value supported by Antelope, via an ALTER TABLE statement. This recreates the indexes.

    The Antelope format can be used to make sure that tables work on MariaDB and MySQL servers which are older than 5.5.

    See Also

    • InnoDB Storage Formats

    This page is licensed: CC BY-SA / Gnu FDL

Feature Summary

Thread: InnoDB I/O Threads
Storage Engine: InnoDB
Purpose: Reading data from disk / Writing data to disk
Quantity: Set by innodb_read_io_threads and innodb_write_io_threads
Resources: Configure the InnoDB I/O Threads

    Basic Configuration
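For example, the I/O thread counts can be set in an option file, or changed dynamically, and then inspected:

[mariadb]
...
innodb_read_io_threads=8
innodb_write_io_threads=8

SET GLOBAL innodb_read_io_threads=8;
SET GLOBAL innodb_write_io_threads=8;

SHOW GLOBAL VARIABLES LIKE 'innodb_%_io_threads';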

    This page is: Copyright © 2025 MariaDB. All rights reserved.


    Choosing the Right Storage Engine

    A guide to selecting the appropriate storage engine based on data needs, comparing features of general-purpose, columnar, and specialized engines.

    A high-level overview of the main reasons for choosing a particular storage engine:

    Topic List

    General Purpose

• InnoDB is a good general transaction storage engine, and the best choice in most cases. It is the default storage engine.

• Aria, MariaDB's more modern improvement on MyISAM, has a small footprint and allows for easy copying between systems.

• MyISAM has a small footprint and allows for easy copying between systems. MyISAM is MySQL's oldest storage engine. There is usually little reason to use it except for legacy purposes. Aria is MariaDB's more modern improvement.

• XtraDB is no longer available. It was a performance-enhanced fork of InnoDB and was MariaDB's default engine until MariaDB 10.1.

    Scaling, Partitioning

When you want to split your database load across several servers or optimize for scaling. We also suggest looking at Galera Cluster, a synchronous multi-master cluster.

• Spider uses partitioning to provide data sharding through multiple servers.

• ColumnStore utilizes a massively parallel distributed data architecture and is designed for big data scaling to process petabytes of data.

• The MERGE storage engine is a collection of identical tables that can be used as one. "Identical" means that all tables have identical column and index information.

    Compression / Archive

• MyRocks enables greater compression than InnoDB, as well as less write amplification, giving better endurance of flash storage and improving overall throughput.

• The ARCHIVE storage engine is, unsurprisingly, best used for archiving.

    Connecting to Other Data Sources

    When you want to use data not stored in a MariaDB database.

• The CSV storage engine can read and append to files stored in CSV (comma-separated-values) format. However, since MariaDB 10.0, CONNECT is a better choice and is more flexibly able to read and write such files.

    Search Optimized

    Search engines optimized for search.

• SphinxSE is used as a proxy to run statements on a remote Sphinx database server (mainly useful for advanced fulltext searches).

• Mroonga provides fast CJK-ready full text searching using a column store.

    Cache, Read-only

• MEMORY does not write data on disk (all rows are lost on crash) and is best used for read-only caches of data from other tables, or for temporary work areas. With the default InnoDB and other storage engines having good caching, there is less need for this engine than in the past.

    Other Specialized Storage Engines

• S3 is a read-only storage engine that stores its data in Amazon S3.

• SEQUENCE allows the creation of ascending or descending sequences of numbers (positive integers) with a given starting value, ending value and increment, creating virtual, ephemeral tables automatically when you need them.

• The BLACKHOLE storage engine accepts data but does not store it and always returns an empty result. This can be useful in replication environments, for example, if you want to run complex filtering rules on a slave without incurring any overhead on a master.

    Alphabetical List

• The ARCHIVE storage engine is, unsurprisingly, best used for archiving.

• Aria, MariaDB's more modern improvement on MyISAM, has a small footprint and allows for easy copying between systems.

• The BLACKHOLE storage engine accepts data but does not store it and always returns an empty result. This can be useful in replication environments, for example, if you want to run complex filtering rules on a slave without incurring any overhead on a master.

    This page is licensed: CC BY-SA / Gnu FDL

    BLACKHOLE

    The BLACKHOLE storage engine discards all data written to it but records operations in the binary log, useful for replication filtering and testing.

    The BLACKHOLE storage engine accepts data but does not store it and always returns an empty result.

    A table using the BLACKHOLE storage engine consists of a single .frm table format file, but no associated data or index files.

    This storage engine can be useful, for example, if you want to run complex filtering rules on a slave without incurring any overhead on a master. The master can run a BLACKHOLE storage engine, with the data replicated to the slave for processing.

    Installing the Plugin

    Although the plugin's shared library is distributed with MariaDB by default, the plugin is not actually installed by MariaDB by default. There are two methods that can be used to install the plugin with MariaDB.

The first method can be used to install the plugin without restarting the server. You can install the plugin dynamically by executing INSTALL SONAME or INSTALL PLUGIN:
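For example, assuming the plugin's shared library is named ha_blackhole, as in standard MariaDB packages:

INSTALL SONAME 'ha_blackhole';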

The second method can be used to tell the server to load the plugin when it starts up. The plugin can be installed this way by providing the --plugin-load or the --plugin-load-add options. This can be specified as a command-line argument to mysqld or it can be specified in a relevant server option group in an option file:

    Uninstalling the Plugin

You can uninstall the plugin dynamically by executing UNINSTALL SONAME or UNINSTALL PLUGIN:

If you installed the plugin by providing the --plugin-load or the --plugin-load-add options in a relevant server option group in an option file, then those options should be removed to prevent the plugin from being loaded the next time the server is restarted.

    Using the BLACKHOLE Storage Engine

    Using with DML

INSERT, UPDATE, and DELETE statements all work with the BLACKHOLE storage engine. However, no data changes are actually applied.

    Using with Replication

If the binary log is enabled, all SQL statements are logged as usual, and replicated to any slave servers. However, since rows are not stored, it is important to use the statement-based rather than the row or mixed format, as UPDATE and DELETE statements are neither logged nor replicated. See Binary Log Formats.

    Using with Triggers

Some triggers work with the BLACKHOLE storage engine.

BEFORE triggers for INSERT statements are still activated.

Triggers for UPDATE and DELETE statements are not activated.

Triggers with the FOR EACH ROW clause do not apply, since the tables have no rows.

    Using with Foreign Keys

Foreign keys are not supported. If you convert an InnoDB table to BLACKHOLE, then the foreign keys will disappear. If you convert the same table back to InnoDB, then you will have to recreate them.

    Using with Virtual Columns

If you convert an InnoDB table which contains virtual columns to BLACKHOLE, then it produces an error.

    Using with AUTO_INCREMENT

Because a BLACKHOLE table does not store data, it will not maintain the AUTO_INCREMENT value. If you are replicating to a table that can handle AUTO_INCREMENT columns, and are not explicitly setting the primary key auto-increment value in the query, or using the SET INSERT_ID statement, inserts will fail on the slave due to duplicate keys.

    Limits

    The maximum key size is:

• 3500 bytes (in later versions)

• 1000 bytes (in earlier versions)

    Examples
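A minimal illustration (the table name is invented): the insert is accepted, but nothing is stored, so the select returns an empty result.

CREATE TABLE bh_test (a INT, b CHAR(10)) ENGINE=BLACKHOLE;
INSERT INTO bh_test VALUES (1,'one'),(2,'two');
SELECT * FROM bh_test;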

    This page is licensed: CC BY-SA / Gnu FDL

    Installing CONNECT

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    The CONNECT storage engine enables MariaDB to access external local or remote data (MED). This is done by defining tables based on different data types, in particular files in various formats, data extracted from other DBMS or products (such as Excel or MongoDB) via ODBC or JDBC, or data retrieved from the environment (for example DIR, WMI, and MAC tables)

    This storage engine supports table partitioning, MariaDB virtual columns and permits defining special columns such as ROWID, FILEID, and SERVID.

    The storage engine must be installed before it can be used.

    Installing the Plugin's Package

The CONNECT storage engine's shared library is included in MariaDB packages as the ha_connect.so or ha_connect.dll shared library on systems where it can be built.

    Installing on Linux

The CONNECT storage engine is included in binary tarballs on Linux.

    Installing with a Package Manager

    The CONNECT storage engine can also be installed via a package manager on Linux. In order to do so, your system needs to be configured to install from one of the MariaDB repositories.

You can configure your package manager to install it from MariaDB Corporation's MariaDB Package Repository by using the MariaDB Package Repository setup script.

You can also configure your package manager to install it from MariaDB Foundation's MariaDB Repository by using the MariaDB Repository Configuration Tool.

    Installing with yum/dnf

On RHEL, CentOS, Fedora, and other similar Linux distributions, it is highly recommended to install the relevant RPM package from MariaDB's repository using yum or dnf. Starting with RHEL 8 and Fedora 22, yum has been replaced by dnf, which is the next major version of yum. However, yum commands still work on many systems that use dnf:

    Installing with apt-get

On Debian, Ubuntu, and other similar Linux distributions, it is highly recommended to install the relevant DEB package from MariaDB's repository using apt-get:

    Installing with zypper

On SLES, OpenSUSE, and other similar Linux distributions, it is highly recommended to install the relevant RPM package from MariaDB's repository using zypper:

    Installing the Plugin

    Once the shared library is in place, the plugin is not actually installed by MariaDB by default. There are two methods that can be used to install the plugin with MariaDB.

The first method can be used to install the plugin without restarting the server. You can install the plugin dynamically by executing INSTALL SONAME or INSTALL PLUGIN:
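For example, using the ha_connect library named above:

INSTALL SONAME 'ha_connect';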

The second method can be used to tell the server to load the plugin when it starts up. The plugin can be installed this way by providing the --plugin-load or the --plugin-load-add options. This can be specified as a command-line argument to mysqld or it can be specified in a relevant server option group in an option file:

    Uninstalling the Plugin

You can uninstall the plugin dynamically by executing UNINSTALL SONAME or UNINSTALL PLUGIN:

If you installed the plugin by providing the --plugin-load or the --plugin-load-add options in a relevant server option group in an option file, then those options should be removed to prevent the plugin from being loaded the next time the server is restarted.

    Installing Dependencies

    The CONNECT storage engine has some external dependencies.

    Installing unixODBC

The CONNECT storage engine requires an ODBC library. On Unix-like systems, that usually means installing unixODBC. On some systems, this is installed as the unixODBC package:

    On other systems, this is installed as the libodbc1 package:

    If you do not have the ODBC library installed, then you may get an error about a missing library when you attempt to install the plugin:

See Also

• Plugin Overview

    This page is licensed: GPLv2

    CONNECT - External Table Types

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    Because so many ODBC and JDBC drivers exist and only the main ones have been heavily tested, these table types cannot be ranked as stable. Use them with care in production applications.

    These types can be used to access tables belonging to the current or another database server. Six types are currently provided:

    ODBC: To be used to access tables from a database management system providing an ODBC connector. ODBC is a standard of Microsoft and is currently available on Windows. On Linux, it can also be used provided a specific application emulating ODBC is installed. Currently only unixODBC is supported.

JDBC: To be used to access tables from a database management system providing a JDBC connector. JDBC is an Oracle standard implemented in Java and principally meant to be used by Java applications. Using it directly from a C or C++ application seems to be almost impossible due to an Oracle bug that is still not fixed. However, this can be achieved using a Java wrapper class as an interface between C++ and JDBC. On the other hand, JDBC is available on all platforms and operating systems.

MONGO: To access MongoDB collections as tables via the MongoDB C Driver. Because this requires both MongoDB and the C Driver to be installed and operational, this table type is not currently available in binary distributions but only when compiling MariaDB from source.

MYSQL: This type is the preferred way to access tables belonging to another MySQL or MariaDB server. It uses the MySQL API to access the external table. Even though this can be obtained using the FEDERATED(X) plugin, this specific type is used internally by CONNECT because it also makes it possible to access tables belonging to the current server.

PROXY: Internally used by some table types to access other tables from one table.

    External Table Specification

    The four main external table types – odbc, jdbc, mongo and mysql – are specified giving the following information:

    1. The data source. This is specified in the connection option.

    2. The remote table or view to access. This can be specified within the connection string or using specific CONNECT options.

3. The column definitions. These can also be left to CONNECT, which can find them using the MariaDB discovery feature.

    4. The optional Quoted option. Can be set to 1 to quote the identifiers in the query sent to the remote server. This is required if columns or table names can contain blanks.

    The way this works is by establishing a connection to the external data source and by sending it an SQL statement (or its equivalent using API functions for MONGO) enabling it to execute the original query. To enhance performance, it is necessary to have the remote data source do the maximum processing. This is needed in particular to reduce the amount of data returned by the data source.

This is why, for SELECT queries, CONNECT uses the MariaDB cond_push feature to retrieve the maximum of the where clause of the original query that can be added to the query sent to the data source. This is automatic and does not require anything to be done by the user.

    However, more can be done. In addition to accessing a remote table, CONNECT offers the possibility to specify what the remote server must do. This is done by specifying it as a view in the srcdef option:
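A hedged sketch of such a table (the dbname, option_list values, and the remote customers table are assumptions for illustration only):

CREATE TABLE custnum ENGINE=CONNECT table_type=MYSQL
  dbname=test option_list='user=proxy'
  SRCDEF='SELECT country, COUNT(*) AS customers FROM customers GROUP BY country';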

Doing so, the group by clause is executed by the remote server, considerably reducing the amount of data sent back over the connection.

This may even be taken further by adding to the srcdef the “compatible” part of the original query's where clause, as is done for table-based tables. Note that for MariaDB, this table has two columns, country and customers. Supposing the original query is:
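For instance, a query of this shape (the filter values are invented): the condition on country belongs in a where clause on the remote server, while the condition on the aggregate alias customers belongs in a having clause.

SELECT * FROM custnum WHERE country LIKE 'E%' AND customers > 10;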

    How can we make the where clause be added to the sent srcdef? There are many problems:

    1. Where to include the additional information.

2. What about the use of aliases.

3. How to know whether a condition belongs in a where clause or a having clause.

    The first problem is solved by preparing the srcdef view to receive clauses. The above example srcdef becomes:
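Continuing the sketch above, place holders are added where the where and having clauses may be injected:

SRCDEF='SELECT country, COUNT(*) AS customers FROM customers WHERE %s GROUP BY country HAVING %s';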

The %s in the srcdef are place holders for eventual compatible parts of the original query where clause. If the select query does not specify a where clause, or gives an unacceptable where clause, the place holders are filled with dummy clauses (1=1).

The other problems must be solved by adding to the create table a list of columns that must be translated because they are aliases, and/or aliases on aggregate functions that must become a having clause. For example, in this case:

    This is specified by the alias option, to be used in the option list. It is made of a semi-colon separated list of items containing:

    1. The local column name (alias in the remote server)

    2. An equal sign.

3. An optional ‘*’ indicating that this column corresponds to an aggregate function.

    4. The remote column name.

With this information, CONNECT is able to construct the query sent to the remote data source:

    Note: Some data sources, including MySQL and MariaDB, accept aliases in the having clause. In that case, the alias option could have been specified as:

Another option exists, phpos, enabling you to specify which place holders are present and in what order. It is specified as “W”, “WH”, “H”, or “HW”. It is rarely used because by default CONNECT can set it from the srcdef content. The only cases where it is needed are when the srcdef contains only a having place holder, or when the having place holder occurs before the where place holder, which can occur in queries containing joins. CONNECT cannot handle more than one place holder of each type.

    SRCDEF is not available for MONGO tables, but other ways of achieving this exist and are described in the MONGO table type chapter.

    This page is licensed: CC BY-SA / Gnu FDL

    Compiling JSON UDFs in a Separate Library

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    Although the JSON UDFs can be nicely included in the CONNECT library module, there are cases when you may need to have them in a separate library.

    This is when CONNECT is compiled embedded, or if you want to test or use these UDFs with other MariaDB versions not including them.

    To make it, you need to have access to the most recent MariaDB source code. Then, make a project containing these files:

    1. jsonudf.cpp

    2. json.cpp

    3. value.cpp

    4. osutil.c

    5. plugutil.cpp

    6. maputil.cpp

    7. jsonutil.cpp

jsonutil.cpp is not distributed with the source code; you will have to make it from the following:

    You can create the file by copy/paste from the above.

Set all the additional include directories to the MariaDB include directories used in plugin compiling, plus the reference to the storage/connect directories, and compile like any other UDF, giving any name to the resulting library module (I used jsonudf.dll on Windows).

    Then you can create the functions using this name as the soname parameter.

    There are some restrictions when using the UDFs this way:

• The connect_json_grp_size variable cannot be accessed. The group size is set and retrieved using the jsonset_grp_size and jsonget_grp_size functions (previously 100).

    • In case of error, warnings are replaced by messages sent to stderr.

    • No trace.

    This page is licensed: CC BY-SA / Gnu FDL

    InnoDB Buffer Pool

    A comprehensive guide to the InnoDB Buffer Pool, the key memory area for caching data and indexes, including configuration and resizing tips.

    The InnoDB storage engine in MariaDB Enterprise Server utilizes the Buffer Pool as a crucial in-memory cache. This Buffer Pool stores recently accessed data pages, enabling faster retrieval for subsequent requests. Recognizing patterns of access, InnoDB also employs predictive prefetching, caching nearby pages when sequential access is detected. To manage memory efficiently, a least recently used (LRU) algorithm is used to evict older, less frequently accessed pages.

    To optimize server restarts, the Buffer Pool's contents can be preserved across shutdowns. At shutdown, the page numbers of all pages residing in the Buffer Pool are recorded. Upon the next startup, InnoDB reads this dump of page numbers and reloads the corresponding data pages from their respective data files, effectively avoiding a "cold" cache scenario.

    The size of each individual page within the Buffer Pool is determined by the setting of the innodb_page_size system variable.

    How the Buffer Pool Works

    The buffer pool attempts to keep frequently-used blocks in the buffer, and so essentially works as two sublists, a new sublist of recently-used information, and an old sublist of older information. By default, 37% of the list is reserved for the old list.

    When new information is accessed that doesn't appear in the list, it is placed at the top of the old list, the oldest item in the old list is removed, and everything else bumps back one position in the list.

When information is accessed that appears in the old list, it is moved to the top of the new list, and everything above moves back one position.

    innodb_buffer_pool_size

The most important system variable is innodb_buffer_pool_size. This size should contain most of the active data set of your server, so that SQL requests can work directly with information in the buffer pool cache. Several gigabytes of memory is a good starting point if you have that RAM available. Once warmed up to its normal load, there should be very few innodb_buffer_pool_reads compared to innodb_buffer_pool_read_requests. Look at how these values change over a minute. If the change in innodb_buffer_pool_reads is less than 1% of the change in innodb_buffer_pool_read_requests, then you have a good amount of usage. If the innodb_buffer_pool_wait_free status variable is increasing, then you don't have enough buffer pool (or your flushing isn't occurring frequently enough).

    The larger the size, the longer it takes to initialize. On a 64-bit server with a 10GB memory pool, this can take five seconds or longer.

    Make sure that the size is not too large, because this can cause swapping, which more than undoes the benefits of a large buffer pool.

The buffer pool size can be set dynamically. See Setting Innodb Buffer Pool Size Dynamically.
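For example (the value is only illustrative):

SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;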

    Buffer Pool Changes

    From MariaDB 10.11.12 / 11.4.6 / 11.8.2, there are significant changes to the InnoDB buffer pool behavior.

MariaDB Server deprecates and ignores innodb_buffer_pool_chunk_size. Now, the buffer pool size is changed in arbitrary 1-megabyte increments, all the way up to innodb_buffer_pool_size_max, which must be specified at startup.

If innodb_buffer_pool_size_max is 0 or not specified, it defaults to the innodb_buffer_pool_size value.

The innodb_buffer_pool_size_auto_min variable specifies the minimum size the buffer pool can be shrunk to by a memory pressure event. When a memory pressure event occurs, MariaDB Server attempts to shrink innodb_buffer_pool_size halfway between its current value and the innodb_buffer_pool_size_auto_min value. If innodb_buffer_pool_size_auto_min is not specified or 0, its default value is adjusted to innodb_buffer_pool_size; in other words, memory pressure events are disregarded by default.

    The minimum innodb_buffer_pool_size is 320 pages (256*5/4). With the default value of innodb_page_size=16k, this corresponds to 5 MiB. However, since innodb_buffer_pool_size includes the memory allocated for the block descriptors, the minimum is effectively innodb_buffer_pool_size=6m.

    When the buffer pool is shrunk, InnoDB tries to inform the operating system that the underlying memory for part of the virtual address range is no longer needed and may be zeroed out. On many POSIX-like systems this is done by madvise(MADV_DONTNEED) where available (Linux, FreeBSD, NetBSD, OpenBSD, Dragonfly BSD, IBM AIX, Apple macOS). On Microsoft Windows, VirtualFree(MEM_DECOMMIT) is invoked. On many systems, there is also MADV_FREE, which would be a deferred variant of MADV_DONTNEED, not freeing the virtual memory mapping immediately. We prefer immediate freeing so that the resident set size of the process reflects the current innodb_buffer_pool_size value. Shrinking the buffer pool is a rarely executed intensive operation, and the immediate configuration of the MMU mappings should not incur significant additional penalty.

The Innodb_buffer_pool_resize_status variable is removed. Issuing SET GLOBAL innodb_buffer_pool_size blocks until the buffer pool has been resized or the operation has been aborted by a KILL or SHUTDOWN command, a client disconnect, or an interrupt.

    innodb_old_blocks_pct and innodb_old_blocks_time

The default 37% reserved for the old list can be adjusted by changing the value of innodb_old_blocks_pct. It can accept anything between 5% and 95%.

The innodb_old_blocks_time variable specifies the delay before a block can be moved from the old to the new sublist. 0 means no delay, while the default has been set to 1000.

    Before changing either of these values from their defaults, make sure you understand the impact and how your system currently uses the buffer. Their main reason for existence is to reduce the impact of full table scans, which are usually infrequent, but large, and previously could clear everything from the buffer. Setting a non-zero delay could help in situations where full table scans are performed in quick succession.

Temporarily changing these values can also be useful to avoid the negative impact of a full table scan, as explained in InnoDB logical backups.

    Dumping and Restoring the Buffer Pool

    When the server starts, the buffer pool is empty. As it starts to access data, the buffer pool will slowly be populated. As more data are accessed, the most frequently accessed data are put into the buffer pool, and old data may be evicted. This means that a certain period of time is necessary before the buffer pool is really useful. This period of time is called the warmup.

InnoDB can dump the buffer pool before the server shuts down, and restore it when it starts again. If this feature is used, no warmup is necessary. Use the innodb_buffer_pool_dump_at_shutdown and innodb_buffer_pool_load_at_startup system variables to enable or disable the buffer pool dump at shutdown and the restore at startup, respectively.

It is also possible to dump the InnoDB buffer pool at any moment while the server is running, and it is possible to restore the last buffer pool dump at any moment. To do this, the special innodb_buffer_pool_dump_now and innodb_buffer_pool_load_now system variables can be set to ON. When selected, their value is always OFF.
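For example, to trigger a dump, and later a restore, at runtime:

SET GLOBAL innodb_buffer_pool_dump_now=ON;
SET GLOBAL innodb_buffer_pool_load_now=ON;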

A buffer pool restore, both at startup or at any other moment, can be aborted by setting innodb_buffer_pool_load_abort to ON.

The file which contains the buffer pool dump is specified via the innodb_buffer_pool_filename system variable.

See Also

• InnoDB Change Buffering

• Information Schema INNODB_BUFFER_POOL_STATS Table

• Setting Innodb Buffer Pool Size Dynamically

    This page is licensed: CC BY-SA / Gnu FDL

    Adding the REST Feature as a Library Called by an OEM Table

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    If you are using a version of MariaDB that does not support REST, this is how the REST feature can be added as a library called by an OEM table.

    Before making the REST OEM module, the Microsoft Casablanca package must be installed as for compiling MariaDB from source.

Even if this module is to be used with a binary distribution, you need some CONNECT source files in order to successfully make it. It is made with four files existing in version 1.06.0010 of CONNECT: tabrest.cpp, restget.cpp, tabrest.h and mini-global.h. It also needs the CONNECT header files that are included in tabrest.cpp and the ones they can include. This can be obtained by going to a recent download site of a version of MariaDB that includes the REST feature, downloading the MariaDB source tar.gz, and extracting from it the CONNECT source files into a directory that is added to the additional source directories if it is not the directory containing the above files.

On Windows, use a recent version of Visual Studio. Make a new empty DLL project and add the source files tabrest.cpp and restget.cpp. Visual Studio should automatically generate all necessary connections to the cpprest SDK. Just edit the properties of the project to add the additional include directory (the one where the CONNECT source was downloaded) and the link to the ha_connect.lib of the binary version of MariaDB (in the same directory as ha_connect.dll in your binary distribution). Add the preprocessor definition XML_SUPPORT. Also set, in the linker input page of the project properties, the Module Definition File to the rest.def file (with its full path), which also exists in the CONNECT source files. If you are making a debug configuration, make sure that in the C/C++ Code Generation page the Runtime Library line specifies Multi-threaded Debug DLL (/MDd), or your server will crash when using the feature.

This is not really simple, but it is nothing compared with Linux! Someone who made an OEM module for their own application wrote:

    For whatever reason, g++ / ld on Linux are both extremely picky about what they will and won't consider a "library" for linking purposes. In order to get them to recognize and therefore find ha_connect.so as a "valid" linkable library, ha_connect.so must exist in a directory whose path is in /etc/ld.so.conf or /etc/ld.so.conf.d/ha_connect.conf AND its filename must begin with "lib".

    On Fedora, you can make a link to ha_connect.so by:

This provides a library whose name begins with “lib”. It was made in /usr/lib64/ because that was the directory of the libcpprest.so Casablanca library. This avoided the need for a file in /etc/ld.so.conf.d, as this was already done for the cpprest library. Note that the -s parameter is a must; without it, all sorts of nasty errors occur when using the feature.

    Then compile and install the OEM module with:

    The oemrest.mak file:

The SD and CD variables are the directories of the CONNECT source files and the one containing the libcpprest.so library. They can be edited to match those on your machine. OD is the directory that was made to contain the object files.

    A very important flag is -fno-rtti. Without it you would be in big trouble.

The resulting module, for instance rest.so or rest.dll, must be placed in the plugin directory of the MariaDB server. Then, you are able to use NoSQL tables simply by replacing, in the CREATE TABLE statement, the TABLE_TYPE=JSON or XML option with TABLE_TYPE=OEM SUBTYPE=REST MODULE='rest.(so|dll)'. Actually, the module name, here supposedly ‘rest’, can be anything you like.

    The file type is JSON by default. If not, it must be specified like this:

    To be added to the create table statement. For instance:

    Note: this last example returns an XML file whose format was not recognized by old CONNECT versions. It is here the reason of the option ‘Rownode=weatherdata’.

If you have trouble making the module, you can post an issue on JIRA.

    This page is licensed: CC BY-SA / Gnu FDL

    AUTO_INCREMENT Handling in InnoDB

    This page explains how InnoDB manages AUTO_INCREMENT columns, including initialization behavior, gap handling, and potential restart effects.

    AUTO_INCREMENT Lock Modes

    The innodb_autoinc_lock_mode system variable determines the lock mode when generating AUTO_INCREMENT values for InnoDB tables. These modes allow InnoDB to make significant performance optimizations in certain circumstances.

    The innodb_autoinc_lock_mode system variable may be removed in a future release. See MDEV-19577 for more information.

    Traditional Lock Mode

When innodb_autoinc_lock_mode is set to 0, InnoDB uses the traditional lock mode.

In this mode, InnoDB holds a table-level lock for all INSERT statements until the statement completes.

    Consecutive Lock Mode

When innodb_autoinc_lock_mode is set to 1, InnoDB uses the consecutive lock mode.

In this mode, InnoDB holds a table-level lock for all bulk insert statements (such as LOAD DATA or INSERT ... SELECT) until the end of the statement. For simple insert statements, no table-level lock is held. Instead, a lightweight mutex is used, which scales significantly better. This is the default setting.

    Interleaved Lock Mode

When innodb_autoinc_lock_mode is set to 2, InnoDB uses the interleaved lock mode.

In this mode, InnoDB does not hold any table-level locks at all. This is the fastest and most scalable mode, but it is not safe for replication.
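A sketch of selecting the interleaved mode at startup (the variable is not dynamic, so it goes in an option file or on the command line):

[mariadb]
...
innodb_autoinc_lock_mode=2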

    Setting AUTO_INCREMENT Values

The AUTO_INCREMENT value for an InnoDB table can be set by executing the ALTER TABLE statement and specifying the AUTO_INCREMENT table option:
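For example (the table name is only illustrative):

ALTER TABLE tab AUTO_INCREMENT=100;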

However, in older versions, InnoDB stores the table's AUTO_INCREMENT counter in memory. In these versions, when the server restarts, the counter is re-initialized to the highest value found in the table. This means that the above operation can be undone if the server is restarted before any rows are written to the table.

In later versions, the counter is persistent, so this restriction is no longer present. Persistent, however, does not mean transactional. Gaps may still occur in some cases, for example if a statement fails or is rolled back.

    For example:

If the server is restarted at this point, then the counter will revert to 101, which is the persistent value set as part of the failed statement.

    See Also

• Sequences: an alternative to AUTO_INCREMENT available in newer MariaDB versions

    This page is licensed: CC BY-SA / Gnu FDL

    InnoDB Asynchronous I/O

    Explore how InnoDB uses asynchronous I/O on various operating systems to handle multiple read and write requests concurrently without blocking.

InnoDB uses asynchronous I/O to read from and write to disk asynchronously. This forms part of the InnoDB Background Thread Pool.

    Stages

    Each asynchronous IO operation goes through multiple stages:

    1. SUBMITTED – The IO operation is initiated.

    • For asynchronous writes, this typically occurs in the buffer pool flushing code.

    • For asynchronous reads, this may happen during buffer pool loading at startup or in prefetching logic.

2. COMPLETED_IN_OS – The operating system notifies InnoDB that the I/O operation is complete.

    • If using libaio or io_uring, a dedicated thread handles this notification.

    • The completed IO operation is then submitted to InnoDB’s internal thread pool (tpool).

3. EXECUTING_COMPLETION_TASK – A tpool thread processes the completion task for the IO operation.

4. COMPLETED – The IO operation is fully handled.

    Resource Constraints and Queuing Mechanisms

    Waiting for IO Slots

The total number of pending asynchronous IO operations is limited in proportion to the number of IO threads, where number_of_IO_threads refers to either innodb_read_io_threads or innodb_write_io_threads.

    Each IO operation is associated with an IO slot, which contains necessary metadata such as the file handle, operation type, offset, length, and any OS error codes. Initially, all total_count slots are free, but as pending IO requests accumulate, slots get occupied. If all slots are in use, additional IO requests must wait for a free slot.

    Queuing Mechanism

The number of completion tasks (EXECUTING_COMPLETION_TASK stage) that can run in parallel is also limited by innodb_read_io_threads or innodb_write_io_threads. If too many IO operations complete simultaneously, they cannot all be processed in parallel and must be queued, respecting the thread limit.

    Variables

A number of status variables were added to give insight into the above operations:

• The number of read IO operations currently in progress (from SUBMITTED to COMPLETED).

• The number of read IO operations currently in the EXECUTING_COMPLETION_TASK state.

• The total number of read completion tasks that have finished execution.

• The current size of the completion task queue.

Similar variables exist for write operations.

    This page is licensed: CC BY-SA / Gnu FDL

    CONNECT Table Types - Data Files

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

Most of the tables processed by CONNECT are just plain DOS or UNIX data files, logically regarded as tables thanks to the description given when creating the table. This description comes from the CREATE TABLE statement. Depending on the application, these tables can already exist as data files, used as is by CONNECT, or can have been physically made by CONNECT as the result of a CREATE TABLE ... SELECT ... and/or INSERT statement(s).

The file path/name is given by the FILE_NAME option. If it is a relative path/name, it is relative to the database directory, the one containing the table's .frm file.

    CONNECT - Files Retrieved Using Rest Queries

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

JSON, XML and possibly CSV data files can be retrieved as results from REST queries when creating or querying such tables. This is done internally by CONNECT using the CURL program generally available on all systems (if not, just install it).

This can also be done using the Microsoft Casablanca (cpprestsdk) package. To enable it, first install the package. Then make the GetRest library (dll or so) as explained in Making the GetRest Library.

    Note: If both are available, cpprestsdk is used preferably because it is faster. This can be changed by specifying ‘curl=1’ in the option list.

Note: If you want to use this feature with an older distributed version of MariaDB not featuring REST, it is possible to add it as an OEM module, as explained in Adding the REST Feature as a Library Called by an OEM Table.

    Making the GetRest Library

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

To enable the REST feature with binary distributions of MariaDB, the function calling the cpprestsdk package is not included in CONNECT, thus allowing CONNECT to operate normally when the cpprestsdk package is not installed. Therefore, it must be compiled separately as a library (so or dll) that is loaded by CONNECT when needed.

    This library will contain only one file shown here:

    This file exists in the source of CONNECT as restget.cpp. If you have no access to the source, use your favorite editor to make it by copy/pasting from the above.

    Then, on Linux, compile the GetRest.so library:

    InnoDB Change Buffering

    Learn about the change buffer, an optimization that delays secondary index writes to reduce I/O overhead for non-unique index modifications.

The change buffer has been disabled by default, was later deprecated and ignored, and has since been removed.

Benchmarks show that the change buffer sometimes reduces performance, and in the best case seems to bring a few per cent improvement in throughput. However, such improvement could come with a price: if the buffered changes are never merged (a change motivated by the reduction of random crashes and the removal of an innodb_force_recovery option that can inflict further corruption), then the InnoDB system tablespace can grow out of control.


    InnoDB Limitations

    A list of constraints and limits within the InnoDB engine, including maximum table size, column counts, and index key lengths.

The InnoDB storage engine has the following limitations.

    Limitations on Schema

• InnoDB tables can have a maximum of 1,017 columns. This includes virtual columns.

    InnoDB Undo Log

    The undo log stores the "before" image of data modified by active transactions, supporting rollbacks and consistent read views.

    Overview

When a transaction writes data, it always inserts it in the table indexes or data (in the buffer pool or in physical files). No private copies are created. The old versions of data being modified by active transactions are stored in the undo log. The original data can then be restored, or viewed by a consistent read.

    SHOW GLOBAL VARIABLES LIKE 'default_storage_engine';
    +------------------------+--------+
    | Variable_name          | Value  |
    +------------------------+--------+
    | default_storage_engine | InnoDB |
    +------------------------+--------+
    SHOW SESSION VARIABLES LIKE 'default_storage_engine';
    +------------------------+--------+
    | Variable_name          | Value  |
    +------------------------+--------+
    | default_storage_engine | InnoDB |
    +------------------------+--------+
    SET GLOBAL default_storage_engine='MyRocks';
    SET SESSION default_storage_engine='MyRocks';
    [mariadb]
    ...
    default_storage_engine=MyRocks
    SHOW ENGINES;
    CREATE TABLE accounts.messages (
      id INT PRIMARY KEY AUTO_INCREMENT,
      sender_id INT,
      receiver_id INT,
      message TEXT
    ) ENGINE = MyRocks;
    CREATE TABLE csv_test (x INT, y DATE, z CHAR(10)) ENGINE=CSV;
    ERROR 1178 (42000): The storage engine for the table doesn't support nullable columns
    CREATE TABLE csv_test (
      x INT NOT NULL, y DATE NOT NULL, z CHAR(10) NOT NULL
      ) ENGINE=CSV;
    INSERT INTO csv_test VALUES
        (1,CURDATE(),'one'),
        (2,CURDATE(),'two'),
        (3,CURDATE(),'three');
    SELECT * FROM csv_test;
    +---+------------+-------+
    | x | y          | z     |
    +---+------------+-------+
    | 1 | 2011-11-16 | one   |
    | 2 | 2011-11-16 | two   |
    | 3 | 2011-11-16 | three |
    +---+------------+-------+
    $ cat csv_test.CSV
    1,"2011-11-16","one"
    2,"2011-11-16","two"
    3,"2011-11-16","three"
    INSTALL SONAME 'ha_archive';
    [mariadb]
    ...
    plugin_load_add = ha_archive
    UNINSTALL SONAME 'ha_archive';
    CREATE TABLE xboy ENGINE=CONNECT 
      table_type=PROXY tabname=boys;
    CREATE TABLE xboy ENGINE=CONNECT tabname=boys;
    ... option_list='Password=mypass';
    ... option_list='user=proxy';
    CREATE TABLE city (
      city VARCHAR(11),
      boy CHAR(12) flag=1,
      birth DATE)
    ENGINE=CONNECT tabname=boys;
    SELECT * FROM city;
    [mariadb]
    ...
    innodb_read_io_threads=8
    innodb_write_io_threads=8
    SET GLOBAL innodb_read_io_threads=8;
    SET GLOBAL innodb_write_io_threads=8;
    
    SHOW GLOBAL VARIABLES
       LIKE 'innodb_%_io_threads';
    +-------------------------+-------+
    | Variable_name           | Value |
    +-------------------------+-------+
    | innodb_read_io_threads  | 8     |
    | innodb_write_io_threads | 8     |
    +-------------------------+-------+
    DROP TABLE innodb_monitor;
    CREATE TABLE innodb_lock_monitor (a INT) ENGINE=INNODB;
    DROP TABLE innodb_lock_monitor;
    CREATE TABLE innodb_tablespace_monitor (a INT) ENGINE=INNODB;
    DROP TABLE innodb_tablespace_monitor;
    CREATE TABLE innodb_table_monitor (a INT) ENGINE=INNODB;
    DROP TABLE innodb_table_monitor;

city      boy      birth
Dallas    James    1992-05-13
Boston    Bill     1986-09-11


    This page is licensed: GPLv2

    Note: You can replace -O3 by -g to make a debug version.

    This library should be placed where it can be accessed. A good place is the directory where the libcpprest.so is, for instance /usr/lib64. You can move or copy it there.

On Windows, using Visual Studio, make an empty Win32 DLL project named GetRest and add the above file to it. Also add the module definition file restget.def:

Important: This file must be specified in the linker input property page.

Once compiled, the release or debug version can be copied into the corresponding cpprestsdk directory, bin or debug\bin.

That is all. It is a one-off job. Once done, it will work with all new MariaDB versions featuring CONNECT version 1.07.

    Note: the xt tracing variable is true when connect_xtrace setting includes the value “MONGO” (512).

Caution: If your server crashes when using this feature, it is likely because the GetRest library is linked to the wrong cpprestsdk library (this may only apply on Windows). A release version of GetRest must be linked to the release version of the cpprestsdk library (cpprest_2_10.dll), but if you make a debug version of GetRest, make sure it is linked to the debug version of the cpprestsdk library (cpprest_2_10d.dll). This may be automatic if you use Visual Studio to make the GetRest.dll.

    This page is licensed: CC BY-SA / Gnu FDL

    g++ -o GetRest.so -O3 -Wall -std=c++11 -fPIC -shared restget.cpp -lcpprest
    LIBRARY REST
    EXPORTS
       restGetFile     @1
    sudo yum install MariaDB-connect-engine
    sudo apt-get install mariadb-plugin-connect
    sudo zypper install MariaDB-connect-engine
    INSTALL SONAME 'ha_connect';
    [mariadb]
    ...
    plugin_load_add = ha_connect
    UNINSTALL SONAME 'ha_connect';
    sudo yum install unixODBC
    sudo apt-get install libodbc1
    INSTALL SONAME 'ha_connect';
    ERROR 1126 (HY000): Can't open shared library '/home/ian/MariaDB_Downloads/10.1.17/lib/plugin/ha_connect.so' 
      (errno: 2, libodbc.so.1: cannot open shared object file: No such file or directory)
    CREATE TABLE custnum ENGINE=CONNECT TABLE_TYPE=XXX
CONNECTION='connection string'
    SRCDEF='select pays as country, count(*) as customers from custnum group by pays';
    SELECT * FROM custnum WHERE (country = 'UK' OR country = 'USA') AND customers > 5;
    SRCDEF='select pays as country, count(*) as customers from custnum where %s group by pays having %s';
    CREATE TABLE custnum ENGINE=CONNECT TABLE_TYPE=XXX
CONNECTION='connection string'
    SRCDEF='select pays as country, count(*) as customers from custnum where %s group by pays having %s'
    OPTION_LIST='Alias=customers=*count(*);country=pays';
    SELECT pays AS country, COUNT(*) AS customers FROM custnum WHERE (pays = 'UK' OR pays = 'USA') GROUP BY country HAVING COUNT(*) > 5
    OPTION_LIST='Alias=customers=*;country=pays';
    #include "my_global.h"
    #include "mysqld.h"
    #include "plugin.h"
    #include <stdlib.h>
    #include <string.h>
    #include <stdio.h>
    #include <errno.h>
    
    #include "global.h"
    
    extern "C" int GetTraceValue(void) { return 0; }
    uint GetJsonGrpSize(void) { return 100; }
    
    /***********************************************************************/
    /*  These replace missing function of the (not used) DTVAL class.      */
    /***********************************************************************/
    typedef struct _datpar   *PDTP;
    PDTP MakeDateFormat(PGLOBAL, PSZ, bool, bool, int) { return NULL; }
    int ExtractDate(char*, PDTP, int, int val[6]) { return 0; }
    
    
    #ifdef __WIN__
    my_bool CloseFileHandle(HANDLE h)
    {
    	return !CloseHandle(h);
    } /* end of CloseFileHandle */
    
    #else  /* UNIX */
    my_bool CloseFileHandle(HANDLE h)
    {
    	return (close(h)) ? TRUE : FALSE;
    }  /* end of CloseFileHandle */
    
    int GetLastError()
    {
    	return errno;
    }  /* end of GetLastError */
    
    #endif  // UNIX
    
    /***********************************************************************/
    /*  Program for sub-allocating one item in a storage area.             */
    /*  Note: This function is equivalent to PlugSubAlloc except that in   */
    /*  case of insufficient memory, it returns NULL instead of doing a    */
    /*  long jump. The caller must test the return value for error.        */
    /***********************************************************************/
    void *PlgDBSubAlloc(PGLOBAL g, void *memp, size_t size)
    {
      PPOOLHEADER pph;                        // Points on area header.
    
      if (!memp)  	//  Allocation is to be done in the Sarea
        memp = g->Sarea;
    
      size = ((size + 7) / 8) * 8;  /* Round up size to multiple of 8 */
      pph = (PPOOLHEADER)memp;
    
      if ((uint)size > pph->FreeBlk) { /* Not enough memory left in pool */
        sprintf(g->Message,
         "Not enough memory in Work area for request of %d (used=%d free=%d)",
    			(int)size, pph->To_Free, pph->FreeBlk);
        return NULL;
      } // endif size
    
      // Do the suballocation the simplest way
      memp = MakePtr(memp, pph->To_Free);   // Points to sub_allocated block
      pph->To_Free += size;                 // New offset of pool free block
      pph->FreeBlk -= size;                 // New size   of pool free block
    
      return (memp);
    } // end of PlgDBSubAlloc
    /************* Restget C++ Program Source Code File (.CPP) *************/
    /* Adapted from the sample program of the Casablanca tutorial.         */
    /* Copyright Olivier Bertrand 2019.                                    */
    /***********************************************************************/
    #include <cpprest/filestream.h>
    #include <cpprest/http_client.h>
    
    using namespace utility::conversions; // String conversions utilities
    using namespace web;                  // Common features like URIs.
    using namespace web::http;            // Common HTTP functionality
    using namespace web::http::client;    // HTTP client features
    using namespace concurrency::streams; // Asynchronous streams
    
    typedef const char* PCSZ;
    
    extern "C" int restGetFile(char* m, bool xt, PCSZ http, PCSZ uri, PCSZ fn);
    
    /***********************************************************************/
    /*  Make a local copy of the requested file.                           */
    /***********************************************************************/
    int restGetFile(char *m, bool xt, PCSZ http, PCSZ uri, PCSZ fn)
    {
      int  rc = 0;
      auto fileStream = std::make_shared<ostream>();
    
      if (!http || !fn) {
    		strcpy(m, "Missing http or filename");
    		return 2;
      } // endif
    
    	if (xt)
    		fprintf(stderr, "restGetFile: fn=%s\n", fn);
    
      // Open stream to output file.
      pplx::task<void> requestTask = fstream::open_ostream(to_string_t(fn))
        .then([=](ostream outFile) {
          *fileStream= outFile;
    
    			if (xt)
    				fprintf(stderr, "Outfile isopen=%d\n", outFile.is_open());
    
          // Create http_client to send the request.
          http_client client(to_string_t(http));
    
          if (uri) {
            // Build request URI and start the request.
            uri_builder builder(to_string_t(uri));
            return client.request(methods::GET, builder.to_string());
          } else
            return client.request(methods::GET);
        })
    
        // Handle response headers arriving.
        .then([=](http_response response) {
    			if (xt)
    				fprintf(stderr, "Received response status code:%u\n",
                                      response.status_code());
    
          // Write response body into the file.
          return response.body().read_to_end(fileStream->streambuf());
        })
    
        // Close the file stream.
        .then([=](size_t n) {
    			if (xt)
    				fprintf(stderr, "Return size=%zu\n", n);
    
          return fileStream->close();
        });
    
      // Wait for all the outstanding I/O to complete and handle any exceptions
      try {
    		if (xt)
    			fprintf(stderr, "Waiting\n");
    
    		requestTask.wait();
      } catch (const std::exception &e) {
    		if (xt)
    			fprintf(stderr, "Error exception: %s\n", e.what());
    
    		sprintf(m, "Error exception: %s", e.what());
    		rc= 1;
      } // end try/catch
    
    	if (xt)
    		fprintf(stderr, "restget done: rc=%d\n", rc);
    
      return rc;
    } // end of restGetFile
  • ColumnStore utilizes a massively parallel distributed data architecture and is designed for big data scaling to process petabytes of data.
  • CONNECT allows access to different kinds of text files and remote resources as if they were regular MariaDB tables.

  • The CSV storage engine can read and append to files stored in CSV (comma-separated-values) format. However, since MariaDB 10.0, CONNECT is a better choice and is more flexibly able to read and write such files.

  • InnoDB is a good general transaction storage engine, and the best choice in most cases. It is the default storage engine.

  • The MERGE storage engine is a collection of identical MyISAM tables that can be used as one. "Identical" means that all tables have identical column and index information.

  • MEMORY does not write data on-disk (all rows are lost on crash) and is best-used for read-only caches of data from other tables, or for temporary work areas. With the default InnoDB and other storage engines having good caching, there is less need for this engine than in the past.

  • Mroonga provides fast CJK-ready full text searching using column store.

  • MyISAM has a small footprint and allows for easy copying between systems. MyISAM is MySQL's oldest storage engine. There is usually little reason to use it except for legacy purposes. Aria is MariaDB's more modern improvement.

  • MyRocks enables greater compression than InnoDB, as well as less write amplification giving better endurance of flash storage and improving overall throughput.

  • OQGRAPH allows you to handle hierarchies (tree structures) and complex graphs (nodes having many connections in several directions).

  • S3 Storage Engine is a read-only storage engine that stores its data in Amazon S3.

  • Sequence allows the creation of ascending or descending sequences of numbers (positive integers) with a given starting value, ending value and increment, creating virtual, ephemeral tables automatically when you need them.

  • SphinxSE is used as a proxy to run statements on a remote Sphinx database server (mainly useful for advanced fulltext searches).

  • Spider uses partitioning to provide data sharding through multiple servers.


  • innodb_async_reads_wait_slot_sec – Total wait time for a free IO slot (see Waiting for IO Slots).

  • innodb_async_reads_total_enqueues – Total number of read operations that were queued (see Queuing Mechanism). Includes those still waiting and making up innodb_async_reads_queue_size.

  • innodb_async_writes_wait_slot_sec
  • innodb_async_writes_total_enqueues


    Unless specified, the maturity of file table types is stable.

    Multiple File Tables

    A multiple file table is one that is physically contained in several files of the same type instead of just one. These files are processed sequentially during the process of a query and the result is the same as if all the table files were merged into one. This is great to process files coming from different sources (such as cash register log files) or made at different time periods (such as bank monthly reports) regarded as one table. Note that the operations on such files are restricted to sequential Select and Update; and that VEC multiple tables are not supported by CONNECT. The file list depends on the setting of the multiple option of the CREATE TABLE statement for that table.

Multiple tables are specified by the option MULTIPLE=n, which can take four values:

• 0: Not a multiple table (the default). This can be used in an statement.

• 1: The table is made from files located in the same directory. The FILE_NAME option is a pattern, such as 'cash*.log', that all the table file path/names verify.

• 2: The FILE_NAME gives the name of a file that contains the path/names of all the table files. This file can be made using a DIR table.

• 3: Like MULTIPLE=1, but also including eligible files from the directory sub-folders.

    The FILEID special column, described here, allows query pruning by filtering the file list or doing some grouping on the files that make a multiple table.

    Note: Multiple was not initially implemented for XML tables. This restriction was removed in version 1.02.
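As an illustration, here is a minimal sketch of a multiple table definition over CSV files; the table name, columns and file pattern are hypothetical:

CREATE TABLE allcash (
  ticket INT NOT NULL,
  amount DOUBLE(8,2) NOT NULL,
  sold DATE NOT NULL)
ENGINE=CONNECT TABLE_TYPE=CSV
FILE_NAME='cash*.csv' MULTIPLE=1;

A query such as SELECT SUM(amount) FROM allcash then scans all the matching files as if they were merged into one table.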

    Record Format

    This characteristic applies to table files handled by the operating system input/output functions. It is fixed for table types FIX, BIN, DBF and VEC, and it is variable for DOS, VCT, FMT and some JSON tables.

    For fixed tables, most I/O operations are done by block of BLOCK_SIZE rows. This diminishes the number of I/O’s and enables block indexing.

Starting with CONNECT version 1.6.6, the BLOCK_SIZE option can also be specified for variable tables. Then, a file similar to the block indexing file is created by CONNECT that gives the size in bytes of each block of BLOCK_SIZE rows. This enables the use of block I/Os and block indexing for variable tables. It also enables CONNECT to return the exact row number for info commands.

    File Mapping

    For file-based tables of reasonable size, processing time can be greatly enhanced under Windows(TM) and some flavors of UNIX or Linux by using the technique of “file mapping”, in which a file is processed as if it were entirely in memory. Mapping is specified when creating the table by the use of the MAPPED=YES option. This does not apply to tables not handled by system I/O functions (XML and INI).
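For instance, a sketch of a mapped fixed-format table; the file name and columns are hypothetical:

CREATE TABLE mapdept (
  num CHAR(4) NOT NULL,
  name CHAR(20) NOT NULL)
ENGINE=CONNECT TABLE_TYPE=FIX FILE_NAME='dept.dat' MAPPED=YES;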

    Big File Tables

Because all files are handled by the standard input/output functions of the operating system, their size is limited to 2GB, the maximum size handled by standard functions. For some table types, CONNECT can deal with files that are larger than 2GB, or prone to become larger than this limit. These are the FIX, BIN and VEC types. To tell CONNECT to use input/output functions dealing with big files, specify the option HUGE=1 or HUGE=YES for that table. Note however that CONNECT cannot randomly access tables having more than 2G records.

    Compressed File Tables

CONNECT can make and process some tables whose data file is compressed. The only supported compression format is the gzlib format. Zip and zlib formats are supported differently. The table types that can be compressed are DOS, FIX, BIN, CSV and FMT. This can save some disk space at the cost of a somewhat longer processing time.

    Some restrictions apply to compressed tables:

    • Compressed tables are not indexable.

    • Update and partial delete are not supported.

Use the numeric COMPRESS option to specify a compressed table:

• 0: Not compressed

• 1: Compressed in gzlib format.

• 2: Made of compressed blocks of BLOCK_SIZE records (enabling block indexing)
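For example, a sketch of a gzip-compressed CSV table; the file name and columns are hypothetical:

CREATE TABLE zsales (
  id INT NOT NULL,
  amount DOUBLE(8,2) NOT NULL)
ENGINE=CONNECT TABLE_TYPE=CSV FILE_NAME='sales.csv.gz' COMPRESS=1;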

    Relational Formatted Tables

    These are based on files whose records represent one table row. Only the column representation within each record can differ. The following relational formatted tables are supported:

    • DOS and FIX Table Types

    • DBF Table Type

    • BIN Table Type

    • VEC Table Type

    NoSQL Table Types

These are based on files that do not match the relational format but often represent hierarchical data. CONNECT can handle JSON, INI-CFG, XML and some HTML files.

The way this is done is different from what PostgreSQL does. Rather than including in a table some column values of a specific data format (JSON, XML) to be handled by specific functions, CONNECT can directly use JSON, XML or INI files that can be produced by other applications, and it is the table definition that describes where and how the contained information must be retrieved.

    This is also different from what MariaDB does with dynamic columns, which is close to what MySQL and PostgreSQL do with the JSON column type.

    The following NoSQL types are supported:

    • XML Table Type

    • JSON Table Type

    • INI Table Type

    This page is licensed: GPLv2


    Creating Tables using REST

To do so, specify the HTTP address of the web server and, optionally, the URI of the request in the CREATE TABLE statement. For example, for a query returning JSON data:

    As with standard JSON tables, discovery is possible, meaning that you can leave CONNECT to define the columns by analyzing the JSON file. Here you could just do:

    For example, executing:

    returns:

name            address
Leanne Graham   Kulas Light Apt. 556 Gwenborough 92998-3874 -37.3159 81.1496

Here we see that for some complex elements, such as address, which is a JSON object containing values and objects, CONNECT by default has just listed their texts separated by blanks. But it is possible to ask it to analyze the JSON result in more depth by adding the DEPTH option. For instance:
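A sketch of such a statement, assuming the DEPTH table option is available in your CONNECT version (in some older versions the equivalent option was named LEVEL):

CREATE TABLE webusers2
ENGINE=CONNECT DEFAULT CHARSET=utf8
TABLE_TYPE=JSON FILE_NAME='users.json'
HTTP='http://jsonplaceholder.typicode.com' URI='/users'
DEPTH=2;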

Then the table is created as:

This allows one to get all the values of the JSON result, for example:

    That results in:

name              city           company
Leanne Graham     Gwenborough    Romaguera-Crona
Ervin Howell      Wisokyburgh    Deckow-Crist
Clementine Bauch  McKenziehaven  Romaguera-Jacobson
Patricia Lebsack  South Elvis    Robel-Corkery

    Of course, the complete create table (obtained by SHOW CREATE TABLE) can later be edited to make your table return exactly what you want to get. See the JSON table type for details about what and how to specify these.

    Note that such tables are read only. In addition, the data are retrieved from the web each time you query the table with a SELECT statement. This is fine if the result varies each time, such as when you query a weather forecasting site. But if you want to use the retrieved file many times without reloading it, just create another table on the same file without specifying the HTTP option.

    Note: For JSON tables, specifying the file name is optional and defaults to tabname.type. However, you should specify it if you want to use the file later for other tables.

    See the JSON table type for changes that will occur in the new CONNECT versions (distributed in early 2021).

    This page is licensed: CC BY-SA / Gnu FDL

INSERT, UPDATE, and DELETE statements can be particularly heavy operations to perform, as all indexes need to be updated after each change. For this reason these changes are often buffered.

    Pages are modified in the buffer pool, and not immediately on disk. After all the records that cover the changes to a data page have been written to the InnoDB redo log, the changed page may be written (''flushed'') to a data file. Pages that have been modified in memory and not yet flushed are called dirty pages.

    The Change Buffer is an optimization that allows some data to be modified even though the data page does not exist in the buffer pool. Instead of modifying the data in its final destination, we would insert a record into a special Change Buffer that resides in the system tablespace. When the page is read into the buffer pool for any reason, the buffered changes are applied to it.

    The Change Buffer only contains changes to secondary index leaf pages.

    In the days of old, only inserted rows could be buffered, so this buffer was called Insert Buffer. The old name still appears in several places, for example in the output of SHOW ENGINE INNODB STATUS.

    Inserts to UNIQUE secondary indexes cannot be buffered unless unique_checks=0 is used. This may sometimes allow duplicates to be inserted into the UNIQUE secondary index. Much of the time, the UNIQUE constraint would be checked because the change buffer could only be used if the index page is not located in the buffer pool.

    When rows are deleted, a flag is set, thus rows are not immediately deleted. Delete-marked records may be purged after the transaction has been committed and any read views that were created before the commit have been closed. Delete-mark and purge buffering of any secondary indexes is allowed.

    ROLLBACK never makes use of the change buffer; it would force a merge of any changes that were buffered during the execution of the transaction.

    The Change Buffer is an optimization because:

    • Some random-access page reads are transformed into modifications of change buffer pages.

    • A change buffer page can be modified several times in memory and be flushed to disk only once.

    • Dirty pages are flushed together, so the number of IO operations is lower.

If the server crashes or is shut down, the Change Buffer might not be empty. The Change Buffer resides in the InnoDB system tablespace and is covered by the write-ahead log, so the buffered changes can be applied at server restart. A shutdown with innodb_fast_shutdown=0 will merge all buffered changes.

    There is no background task that merges the change buffer to the secondary index pages. Changes are only merged on demand.

    The Change Buffer was removed in because it has been a prominent source of corruption bugs that have been extremely hard to reproduce.

    The main server system variable here is innodb_change_buffering, which determines which form of change buffering, if any, to use.

    The following settings are available:

    • inserts

      • Only buffer insert operations

    • deletes

      • Only buffer delete operations

    • changes

      • Buffer both insert and delete operations

    • purges

      • Buffer the actual physical deletes that occur in the background

    • all

      • Buffer inserts, deletes and purges. Default setting from until , , and .

    • none

      • Don't buffer any operations. Default from , , and .

    Modifying the value of this variable only affects the buffering of new operations. The merging of already buffered changes is not affected.

    The innodb_change_buffer_max_size system variable determines the maximum size of the change buffer, expressed as a percentage of the buffer pool.
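As a sketch, on a server version that still includes the change buffer, both variables are dynamic and can be inspected and changed at runtime (the values shown are only illustrative):

SET GLOBAL innodb_change_buffering='none';
SET GLOBAL innodb_change_buffer_max_size=25;
SHOW GLOBAL VARIABLES LIKE 'innodb_change_buffer%';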

    See Also

    • InnoDB Buffer Pool

    This page is licensed: CC BY-SA / Gnu FDL


• InnoDB tables can have a maximum of 64 secondary indexes.

  • A multicolumn index on InnoDB can use a maximum of 32 columns. If you attempt to create a multicolumn index that uses more than 32 columns, MariaDB returns an Error 1070.

Limitations on Size

    • With the exception of variable-length columns (that is, VARBINARY, VARCHAR, BLOB and TEXT), rows in InnoDB have a maximum length of roughly half the page size for 4KB, 8KB, 16KB and 32KB page sizes.

    • The maximum size for BLOB and TEXT columns is 4GB. This also applies to LONGBLOB and LONGTEXT.

    • MariaDB imposes a row-size limit of 65,535 bytes for the combined sizes of all columns. If the table contains BLOB or TEXT columns, these only count for 9 - 12 bytes in this calculation, given that their content is stored separately.

    • 32-bit operating systems have a maximum file size limit of 2GB. When working with large tables using this architecture, configure InnoDB to use smaller data files.

    • The maximum size for the combined InnoDB log files is 512GB.

    • With tablespaces, the minimum size is 10MB, the maximum varies depending on the InnoDB Page Size.

InnoDB Page Size    Maximum Tablespace Size
4KB                 16TB
8KB                 32TB
16KB                64TB
32KB                128TB
64KB                256TB

    Page Sizes

    Using the innodb_page_size system variable, you can configure the size in bytes for InnoDB pages. Pages default to 16KB. There are certain limitations on how you use this variable.

    • MariaDB instances using one page size cannot use data files or log files from an instance using a different page size.

    • When using a Page Size of 4KB or 8KB, the maximum index key length is lowered proportionately.

InnoDB Page Size    Index Key Length
4KB                 768B
8KB                 1536B
16KB                3072B
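Because data files created with one page size cannot be reused with another, innodb_page_size is normally set in the option file before the instance's data files are first created; a minimal sketch, with an illustrative value:

[mariadb]
...
innodb_page_size=64K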

    Limitations on Tables

    InnoDB has the following table-specific limitations.

    • When you issue a DELETE statement, InnoDB doesn't regenerate the table, rather it deletes each row from the table one by one.

    • When running MariaDB on Windows, InnoDB stores databases and tables in lowercase. When moving databases and tables in a binary format from Windows to a Unix-like system or from a Unix system to Windows, you need to rename these to use lowercase.

    • When using cascading foreign keys, operations in the cascade don't activate triggers.

    Table Analysis

    When running ANALYZE TABLE twice on a table in which statements or transactions are running, MariaDB blocks the second ANALYZE TABLE until the statement or transaction is complete. This occurs because the statement or transaction blocks the second ANALYZE TABLE statement from reloading the table definition, which it must do since the old one was marked as obsolete after the first statement.

    Table Status

    SHOW TABLE STATUS statements do not provide accurate statistics for InnoDB, except for the physical table size.

    The InnoDB storage engine does not maintain internal row counts. Transactions isolate writes, which means that concurrent transactions will not have the same row counts.

    Auto-incrementing Columns

    • When defining an index on an auto-incrementing column, it must be defined in a way that allows the equivalent of SELECT MAX(col) lookups on the table.

    • Restarting MariaDB may cause InnoDB to reuse old auto-increment values, such as in the case of a transaction that was rolled back.

    • When auto-incrementing columns run out of values, INSERT statements generate duplicate-key errors.

    Transactions and Locks

• A maximum of 96 * 1023 concurrent transactions that generate undo records can modify data.

  • Of the 128 rollback segments, InnoDB assigns 32 to non-redo logs for transactions that modify temporary tables and related objects, reducing the maximum number of concurrent data-modifying transactions to 96,000, from 128,000.

    • The limit is 32,000 concurrent transactions when all data-modifying transactions also modify temporary tables.

    • Issuing a LOCK TABLES statement sets two locks on each table when the innodb_table_locks system variable is enabled (the default).

    • When you commit or roll back a transaction, any locks set in the transaction are released. You don't need to issue statements when the variable is enabled, as InnoDB would immediately release the table locks.

    This page is licensed: CC BY-SA / Gnu FDL

    Implementation Details

    Before a row is modified, a diff is copied into the undo log. Each normal row contains a pointer to the most recent version of the same row in the undo log. Each row in the undo log contains a pointer to previous version, if any. So, each modified row has a history chain.

    Rows are never physically deleted until a transaction ends. If they were deleted, the restore in ROLLBACK would be impossible. Thus, rows are simply marked for deletion.

    Each transaction uses a view of the records. The transaction isolation level determines how this view is created. For example, READ UNCOMMITTED usually uses the current version of rows, even if they are not committed (dirty reads). Other isolation levels require that the most recent committed version of rows is searched in the undo log. READ COMMITTED uses a different view for each table, while REPEATABLE READ and SERIALIZABLE use the same view for all tables.

    There is also a global history list of the data. When a transaction is committed, its history is added to this history list. The order of the list is the chronological order of the commits.

The purge thread deletes the rows in the undo log which are not needed by any existing view. The rows for which a more recent version exists are deleted, as well as the delete-marked rows.

    If InnoDB needs to restore an old version, it will simply replace the newer version with the older one. When a transaction inserts a new row, there is no older version. However, in that case, the restore can be done by deleting the inserted rows.

    Effects of Long-Running Transactions

Understanding how the undo log works helps with understanding the negative effects of long transactions.

    • Long transactions generate several old versions of the rows in the undo log. Those rows will probably be needed for a longer time, because other long transactions will need them. Since those transactions will generate more modified rows, a sort of combinatorial explosion can be observed. Thus, the undo log requires more space.

• Transactions may need to read very old versions of the rows in the history list, thus their performance will degrade.

    Of course read-only transactions do not write more entries in the undo log; however, they delay the purging of existing entries.

Also, long transactions are more likely to result in deadlocks, but this problem is not related to the undo log.

    Feature Summary

Feature          Detail
Transaction Log  InnoDB Undo Log
Storage Engine   InnoDB
Purpose          Multi-Version Concurrency Control (MVCC)
Availability     All ES and CS versions

    Configuration

    System variables affecting undo logs include:

    • innodb_max_undo_log_size

    • innodb_undo_directory

    • innodb_undo_log_truncate

    • innodb_undo_logs

    The undo log is not a log file that can be viewed on disk in the usual sense, such as the error log or slow query log, but rather an area of storage.

Before , the undo log is usually part of the physical system tablespace, but from , the innodb_undo_directory and innodb_undo_tablespaces system variables can be used to split the undo log into different tablespaces and store it in a different location (perhaps on a different storage device). From , multiple undo tablespaces are enabled by default, and the innodb_undo_tablespaces default is changed to 3 so that the space occupied by possible bursts of undo log records can be reclaimed after innodb_undo_log_truncate is set.
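A minimal configuration sketch for a server recent enough to support multiple undo tablespaces; the path and values are only illustrative:

[mariadb]
...
innodb_undo_directory=/var/lib/mysql-undo
innodb_undo_tablespaces=3
innodb_max_undo_log_size=16M
innodb_undo_log_truncate=ON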

Each insert or update portion of the undo log is known as a rollback segment. The innodb_undo_logs system variable allowed the number of rollback segments to be reduced from the usual 128, to limit the number of concurrently active write transactions. innodb_undo_logs was deprecated and ignored in and removed in MariaDB 10.6, as it always makes sense to use the maximum number of rollback segments.

    The related innodb_available_undo_logs status variable stores the total number of available InnoDB undo logs.

    This page is licensed: CC BY-SA / Gnu FDL


innodb-scrub-log         Boolean     Enable redo log scrubbing. Deprecated and ignored.
innodb-scrub-log-speed   Bytes/sec   Redo log scrubbing speed in bytes/sec. Deprecated and ignored.


    Aria Storage Engine

    An overview of Aria, a storage engine designed as a crash-safe alternative to MyISAM, featuring transactional capabilities and improved caching.

The Aria storage engine is compiled in by default, and it is required to be 'in use' when MariaDB is started.

    All system tables are Aria.

    Additionally, internal on-disk tables are in the Aria table format instead of the MyISAM table format. This should speed up some GROUP BY and DISTINCT queries because Aria has better caching than MyISAM.

    Note: The Aria storage engine was previously called Maria (see The Aria Name for details on the rename) and in previous versions of MariaDB the engine was still called Maria.

The following table options can be used for Aria tables in CREATE TABLE and ALTER TABLE:

• TRANSACTIONAL= 0 | 1 : If the TRANSACTIONAL table option is set for an Aria table, then the table is crash-safe. This is implemented by logging any changes to the table to Aria's transaction log, and syncing those writes at the end of the statement. This will marginally slow down writes and updates. However, the benefit is that if the server dies before the statement ends, all non-durable changes will roll back to the state at the beginning of the statement. This also needs up to 6 bytes more for each row and key to store the transaction id (to allow concurrent inserts and selects).

      • TRANSACTIONAL=1 is not supported for partitioned tables.

      • An Aria table's default value for the TRANSACTIONAL table option depends on the table's value for the ROW_FORMAT table option. See below for more details.

• Even if the TRANSACTIONAL table option is set for an Aria table, the table does not actually support transactions. See for more information. In this context, transactional just means crash-safe.

    • PAGE_CHECKSUM= 0 | 1 : If index and data should use page checksums for extra safety.

    • TABLE_CHECKSUM= 0 | 1 : Same as CHECKSUM in MySQL 5.1

• ROW_FORMAT=PAGE | FIXED | DYNAMIC : The table's row format.

      • The default value is PAGE.

      • To emulate MyISAM, set ROW_FORMAT=FIXED or ROW_FORMAT=DYNAMIC

    The TRANSACTIONAL and ROW_FORMAT table options interact as follows:

    • If TRANSACTIONAL=1 is set, then the only supported row format is PAGE. If ROW_FORMAT is set to some other value, then Aria issues a warning, but still forces the row format to be PAGE.

• If TRANSACTIONAL=0 is set, then the table is not crash-safe, and any row format is supported.
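A sketch of these table options in use; the table and columns are hypothetical:

CREATE TABLE aria_test (
  id INT NOT NULL,
  msg VARCHAR(100) NOT NULL
) ENGINE=Aria TRANSACTIONAL=1 PAGE_CHECKSUM=1 ROW_FORMAT=PAGE;

ALTER TABLE aria_test TRANSACTIONAL=0;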

    Some other improvements are:

• CHECKSUM TABLE now ignores values in NULL fields. This makes CHECKSUM TABLE faster and fixes some cases where the same table definition could give different checksum values depending on . The disadvantage is that the value is now different compared to other MySQL installations. The new checksum calculation applies to all table engines that use the default way to calculate checksums, and to MyISAM, which does the calculation internally. Note: Old MyISAM tables with internal checksums return the same checksum as before. To fix them to calculate according to the new rules, you have to do an . You can use the old way to calculate checksums by using the --old option to mariadbd/mysqld, or by setting the system variable @@old to 1 when you do CHECKSUM TABLE ... EXTENDED.

    Startup Options for Aria

    For a full list, see .

    In normal operations, the only variables you have to consider are:

      • This is where all index and data pages are cached. The bigger this is, the faster Aria will work.

• The default value, 8192, should be OK for most cases. The only problem with a higher value is that it takes longer to find a packed key in the block, as one has to search roughly 8192/2 to find each key. We plan to fix this by adding a dictionary at the end of the page to be able to do a binary search within the block before starting a scan. Until this is done, if key lookups take too long even when you are not hitting disk, you should consider making this smaller.
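A sketch of setting these in the option file, assuming the variables in question are aria_pagecache_buffer_size and aria_block_size (the values are only illustrative):

[mariadb]
...
aria_pagecache_buffer_size=1G
aria_block_size=8192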

    Aria Log Files

The aria_log_control file is a very short log file (52 bytes) that contains the current state of all Aria tables related to logging and checkpoints. In particular, it contains the following information:

    • The uuid is a unique identifier per system. All Aria files created will have a copy of this in their .MAI headers. This is mainly used to check if someone has copied an Aria file between MariaDB servers.

    • last_checkpoint_lsn and last_log_number are information about the current aria_log files.

    • trid is the highest transaction number seen so far. Used by recovery.

The aria_log.* files contain the log of all operations that change Aria files (including create table, drop table, insert, etc.). This is a 'normal' WAL (Write Ahead Log), similar to the InnoDB log file, except that aria_log files contain both redo and undo. Old aria_log files are automatically deleted when they are not needed anymore (neither the last checkpoint nor any running transaction needs to refer to the old data anymore).

    Missing valid id

The error Missing valid id at start of file. File is not a valid aria control file means that something overwrote at least the first 4 bytes in the file. This can happen due to a problem with the file system (hardware or software), or a bug in which a thread inside MariaDB wrote on the wrong file descriptor (in which case you should report the issue, attaching a copy of the control file to assist).

In the case of a corrupted log file, with the server shut down, one should be able to fix that by deleting all aria_log files. If the control file is corrupted, then one has to delete the aria_log_control file and all aria_log.* files. The effect of this is that when an Aria table is opened, the server will think that it has been moved from another system and do an automatic check and repair of it. If there were no issues, the table is opened and can be used as normal. See also .

    See Also

    This page is licensed: CC BY-SA / Gnu FDL

    CONNECT Table Types Overview

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

CONNECT can handle very many table formats; this is indeed one of its main features. The Type option specifies the type and format of the table. The available values for the Type option and their descriptions are listed in the following table:

    Type
    Description

    Binary file with numeric values in platform representation, also with columns at fixed offset within records and fixed record length.

    Catalog Tables

For all table types marked with a '*' in the table above, CONNECT is able to analyze the data source to retrieve the column definition. This can be used to define a "catalog" table that displays the column description of the source, or to create a table without specifying the column definition, which is then automatically constructed by CONNECT when creating the table.

    When marked with a ‘$’ the file can be the result returned by a REST query.

    This page is licensed: GPLv2

    Using CONNECT - Indexing

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    Indexing is one of the main ways to optimize queries. Key columns, in particular when they are used to join tables, should be indexed. But what should be done for columns that have only few distinct values? If they are randomly placed in the table they should not be indexed because reading many rows in random order can be slower than reading the entire table sequentially. However, if the values are sorted or clustered, indexing can be acceptable because CONNECT indexes store the values in the order they appear into the table and this will make retrieving them almost as fast as reading them sequentially.

    CONNECT provides four indexing types:

    1. Standard Indexing

    2. Block Indexing

    3. Remote Indexing

    4. Dynamic Indexing

    Standard Indexing

    CONNECT standard indexes are created and used as the ones of other storage engines although they have a specific internal format. The CONNECT handler supports the use of standard indexes for most of the file based table types.

You can define them in the CREATE TABLE statement, or add them later using the CREATE INDEX statement or the ALTER TABLE statement. In all cases, the index files are automatically made. They can be dropped either using the DROP INDEX statement or the ALTER TABLE statement, and this erases the index files.

Indexes are automatically reconstructed when the table is created, modified by INSERT, UPDATE or DELETE commands, or when the SEPINDEX option is changed. If you have a lot of changes to make to a table at one time, you can use table locking to prevent the indexes from being reconstructed after each statement. The indexes are reconstructed when the table is unlocked. For instance:
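A sketch of this pattern, assuming t1 is an indexed CONNECT table and the columns and values are illustrative:

LOCK TABLES t1 WRITE;
INSERT INTO t1 VALUES (10,'ten'),(11,'eleven');
INSERT INTO t1 VALUES (12,'twelve');
UPDATE t1 SET msg='TEN' WHERE id=10;
UNLOCK TABLES;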

If a table was modified by an external application that does not handle indexing, the indexes must be reconstructed to prevent returning false or incomplete results. To do this, use the OPTIMIZE TABLE command.

    For outward tables, index files are not erased when dropping the table. This is the same as for the data file and preserves the possibility of several users using the same data file via different tables.

Unlike other storage engines, CONNECT constructs the indexes as files that are named by default from the data file name, not from the table name, and located in the data file directory. Depending on the SEPINDEX table option, indexes are saved in a unique file or in separate files (if SEPINDEX is true). For instance, if indexes are in separate files, the primary index of the table dept.dat of type DOS is a file named dept_PRIMARY.dnx. This makes it possible to define several tables on the same data file, with possibly different options such as mapped or not mapped, and to share the index files as well.

    If the index file should have a different name, for instance because several tables are created on the same data file with different indexes, specify the base index file name with the XFILE_NAME option.

Note 1: Indexed columns must be declared NOT NULL; CONNECT doesn't support indexes containing null values.

    Note 2: MRR is used by standard indexing if it is enabled.

    Note 3: Prefix indexing is not supported. If specified, the CONNECT engine ignores the prefix and builds a whole index.

    Handling index errors

The way CONNECT handles indexing is very specific. All table modifications are done regardless of indexing. Only after a table has been modified, or when an OPTIMIZE TABLE command is sent, are the indexes made. If an error occurs, the corresponding index is not made. However, CONNECT being a non-transactional engine, it is unable to roll back the changes made to the table. The main causes of indexing errors are:

    • Trying to index a nullable column. In this case, you can alter the table to declare the column as not nullable or, if the column is nullable indeed, make it not indexed.

• Entering duplicate values in a column indexed by a unique index. In this case, if the index was wrongly declared as unique, alter its declaration to reflect this. If the column should really contain unique values, you must manually remove or update the duplicate values.

In both cases, after correcting the error, remake the indexes with the OPTIMIZE TABLE command.

    Index file mapping

    To accelerate the indexing process, CONNECT makes an index structure in memory from the index file. This can be done by reading the index file or using it as if it was in memory by “file mapping”. On enabled versions, file mapping is used according to the boolean system variable. Set it to 0 (file read) or 1 (file mapping).

    Block Indexing

To accelerate input/output, CONNECT uses when possible a read/write mode by blocks of n rows, n being the value given in the BLOCK_SIZE option of the CREATE TABLE statement, or a default value depending on the table type. This is automatic for fixed files, but must be specified for variable files.

For blocked tables, further optimization can be achieved if the data values for some columns are "clustered", meaning that they are not evenly scattered in the table but grouped in some consecutive rows. Block indexing permits skipping blocks in which no rows fulfill a conditional predicate without even having to read the block. This is true in particular for sorted columns.

You indicate this when creating the table by using the DISTRIB=d column option. The enum value d can be scattered, clustered, or sorted. In general only one column can be sorted. Block indexing is used only for clustered and sorted columns.
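A sketch of a blocked table declaring column distribution; the table name, columns and exact option spelling are assumptions to check against your CONNECT version:

CREATE TABLE cashlog (
  tdate DATE NOT NULL DISTRIB='sorted',
  shop CHAR(4) NOT NULL DISTRIB='clustered',
  amount DOUBLE(8,2) NOT NULL)
ENGINE=CONNECT TABLE_TYPE=FIX FILE_NAME='cashlog.txt' BLOCK_SIZE=100;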

    Difference between standard indexing and block indexing

    • Block indexing is internally handled by CONNECT while reading sequentially a table data. This means in particular that when standard indexing is used on a table, block indexing is not used.

    • In a query, only one standard index can be used. However, block indexing can combine the restrictions coming from a where clause implying several clustered/sorted columns.

    • The block index files are faster to make and much smaller than standard index files.

    Notes for this Release:

• On all operations that create or modify a table, CONNECT automatically calculates or recalculates and saves the min/max or bitmap values for each block, enabling it to skip blocks containing no acceptable values. In the case where the optimize file no longer corresponds to the table, because it has been accidentally destroyed, or because some column definitions have been altered, you can use the OPTIMIZE TABLE command to reconstruct the optimization file.

• Sorted column special processing is currently restricted to ascending sort. Columns sorted in descending order must be flagged as clustered. Improper sorting is not checked in Update or Insert operations but is flagged when optimizing the table.

• Block indexing can be done in two ways: keeping the min/max values existing for each block, or keeping a bitmap indicating which distinct column values are met in each block. This second way often gives a better optimization, except for sorted columns for which both are equivalent. The bitmap approach can be done only on columns having not too many distinct values. This is estimated by the MAX_DIST option value associated to the column when creating the table. Bitmap block indexing is used if this number is not greater than the MAXBMP setting for the database.

    Remote Indexing

Remote indexing is specific to the MYSQL table type. It is equivalent to what the storage does. A MYSQL table does not support indexes per se. Because access to the table is handled remotely, it is the remote table that supports the indexes. What the MYSQL table does is just add a WHERE clause to the command sent to the remote server, allowing the remote server to use indexing when applicable. Note however that because CONNECT adds, when possible, all or part of the where clause of the original query, this happens often even if the remote indexed column is not declared locally indexed. The only case where a column should be locally declared indexed, but a very important one, is when it is used to join tables. Otherwise, the required where clause would not be added to the sent SELECT query.

    See for more.

    Dynamic Indexing

An index created as "dynamic" is a standard index which, in some cases, can be reconstructed for a specific query. This happens in particular for some queries where two tables are joined by an indexed key column. If the "from" table is big and the "to" big table is reduced in size because of a where clause, it can be worthwhile to reconstruct the index on this reduced table.

Because of the time added by reconstructing the index, this is valuable only if the time gained by reducing the index size is more than this reconstruction time. This is why this should not be done if the "from" table is small, because there will not be enough joined rows to compensate for the additional time. Otherwise, the gains of using a dynamic index are:

    • Indexing time is a little faster if the index is smaller.

    • The join process will return only the rows fulfilling the where clause.

• Because the table is read sequentially when reconstructing the index, there is no need for MRR.

    • Constructing the index can be faster if the table is reduced by block indexing.

    This last point is particularly important. It means that after the index is reconstructed, the join is done on a temporary memory table.

Unfortunately, storage engines being called independently by MariaDB for each table, CONNECT has no global information to decide when it is good to use dynamic indexing. This is why you should use it only in cases where you see that some important join queries take a very long time, and only on columns used for joining the table. An index is declared dynamic by using the Boolean DYNAM index option. For instance, the query:

    Such a query joining the diag table to the patients table may last a very long time if the tables are big. To declare the primary key on the pnb column of the patients table to be dynamic:
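A sketch of how this might be declared, reusing the patients/pnb names from the text (the COMMENT is optional):

ALTER TABLE patients DROP PRIMARY KEY;
ALTER TABLE patients ADD PRIMARY KEY (pnb) COMMENT 'DYNAMIC' DYNAM=1;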

    Note 1: The comment is not mandatory here but useful to see that the index is dynamic if you use the command.

    Note 2: There is currently no way to just change the DYNAM option without dropping and adding the index. This is unfortunate because it takes time.

    Virtual Indexing

It applies only to virtual tables of type VIR and must be made on a column specifying SPECIAL=ROWID or SPECIAL=ROWNUM.

    This page is licensed: GPLv2

    CONNECT Table Types - VIR

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    VIR Type

    A VIR table is a virtual table having only Special or Virtual columns. Its only property is its “size”, or cardinality, meaning the number of virtual rows it contains. It is created using the syntax:
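A minimal sketch of such a create statement (the table name and size are illustrative):

CREATE TABLE virt ENGINE=CONNECT TABLE_TYPE=VIR BLOCK_SIZE=10;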

    The optional BLOCK_SIZE option gives the size of the table, defaulting to 1 if not specified. When its columns are not specified, it is almost equivalent to a table “seq_1_to_Size”.

    Displaying constants or expressions

Many DBMSs use a no-column one-line table to do this, often called "dual". MySQL and MariaDB use a syntax where no table is specified. With CONNECT, you can achieve the same purpose with a virtual table, with the noticeable advantage of being able to display several lines:

    This will return:

    what
    value

What happened here? First of all, unlike Oracle "dual" tables that have no columns, a MariaDB table must have at least one column. By default, CONNECT creates VIR tables with one special column. This can be seen with the SHOW CREATE TABLE statement:

    This special column is called “n” and its value is the row number starting from 1. It is purely a virtual table and no data file exists corresponding to it and to its index. It is possible to specify the columns of a VIR table but they must be CONNECT special columns or virtual columns. For instance:

    This table shows the sum and the sum of the square of the n first integers:

    n
    sig1
    sig2

Note that the size of the table can be made very big as there is no physical data. However, the result should be limited in the queries. For instance:

    Such a query could last very long if the rowid column were not indexed. Note that by default, CONNECT declares the “n” column as a primary key. Actually, VIR tables can be indexed but only on the ROWID (or ROWNUM) columns of the table. This is a virtual index for which no data is stored.

    Generating a Table filled with constant values

    An interesting use of virtual tables, which often cannot be achieved with a table of any other type, is to generate a table containing constant values. This is easily done with a virtual table. Let us define the table FILLER as:
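A table of five million virtual rows is enough for most uses:

CREATE TABLE filler ENGINE=CONNECT table_type=VIR block_size=5000000;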

Here we choose a size larger than the biggest table we want to generate. Later, if we need a table pre-filled with default and/or null values, we can do, for example:
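Here tp is an ordinary table pre-filled from filler:

CREATE TABLE tp (
  id INT(6) KEY NOT NULL,
  name CHAR(16) NOT NULL,
  salary FLOAT(8,2));
INSERT INTO tp SELECT n, 'unknown', NULL FROM filler WHERE n <= 10000;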

This will generate a table having 10000 rows that can be updated later when needed. Note that a SEQUENCE table could have been used here instead of FILLER.

    VIR tables vs. SEQUENCE tables

With just its default column, a VIR table is almost equivalent to a SEQUENCE table. The syntax used is the main difference. For instance:
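With a SEQUENCE table:

SELECT * FROM seq_100_to_150_step_10;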

    can be obtained with a VIR table (of size >= 15) by:
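That is, using a VIR table named vir of at least 15 rows:

SELECT n*10 FROM vir WHERE n BETWEEN 10 AND 15;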

    Therefore, the main difference is to be able to define the columns of VIR tables. Unfortunately, there are currently many limitations to virtual columns that hopefully should be removed in the future.

    This page is licensed: CC BY-SA / Gnu FDL

    CONNECT INI Table Type

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    Overview

The INI type corresponds to the configuration or initialization files often found on Windows machines. For instance, let us suppose you have the following contact file, contact.ini:
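The file contains one section per contact, each with its own keys:

[BER]
name=Bertrand
forename=Olivier
address=21 rue Ferdinand Buisson
city=Issy-les-Mlx
zipcode=92130
tel=09.54.36.29.60
cell=06.70.06.04.16

[WEL]
name=Schmitt
forename=Bernard
hired=19/02/1985
address=64 tiergarten strasse
city=Berlin
zipcode=95013
tel=03.43.377.360

[UK1]
name=Smith
forename=Henry
hired=08/11/2003
address=143 Blum Rd.
city=London
zipcode=NW1 2BP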

    OEM Table Example

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    This is an example showing how an OEM table can be implemented.

    The header File my_global.h:

Note: This is a fake my_global.h that just contains what is useful for the jmgoem.cpp source file.

    The source File jmgoem.cpp

    CONNECT Zipped File Tables

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

CONNECT can work on table files that are compressed in one or several zip files.

    The specific options used when creating tables based on zip files are:

ZIPPED (Boolean): Required to be set as true.

ENTRY* (String): The optional name or pattern of the zip entry or entries to be used with the table. If not specified, all entries or only the first one are used depending on the mulentries option setting.

MULENTRIES* (Boolean): True if several entries are part of the table. If not specified, it defaults to false if the entry option is not specified. If the entry option is specified, it defaults to true if the entry name contains wildcard characters or false if it does not.

APPEND* (Boolean): Used when creating new zipped tables (see below).

LOAD* (String): Used when creating new zipped tables (see below).

    InnoDB Purge

    The purge process is a garbage collection mechanism that removes old row versions from the undo log that are no longer required for MVCC.

When a transaction updates a row in an InnoDB table, InnoDB's MVCC implementation keeps old versions of the row in the InnoDB undo log. The old versions are kept at least until all transactions older than the transaction that updated the row are no longer open. At that point, the old versions can be deleted. InnoDB has a purge process that is used to delete these old versions.

    InnoDB Purge Threads

    In MariaDB Enterprise Server, the InnoDB storage engine uses Purge Threads to perform garbage collection in the background. The Purge Threads are related to multi-version concurrency control (MVCC).

    The Purge Threads perform garbage collection of various items:

    total_count = number_of_IO_threads * 256
    CREATE TABLE webusers (
      id BIGINT(2) NOT NULL,
      name CHAR(24) NOT NULL,
      username CHAR(16) NOT NULL,
      email CHAR(25) NOT NULL,
      address VARCHAR(256) DEFAULT NULL,
      phone CHAR(21) NOT NULL,
      website CHAR(13) NOT NULL,
      company VARCHAR(256) DEFAULT NULL
    ) ENGINE=CONNECT DEFAULT CHARSET=utf8
    TABLE_TYPE=JSON FILE_NAME='users.json' HTTP='http://jsonplaceholder.typicode.com' URI='/users';
    CREATE TABLE webusers
    ENGINE=CONNECT DEFAULT CHARSET=utf8
    TABLE_TYPE=JSON FILE_NAME='users.json'
    HTTP='http://jsonplaceholder.typicode.com' URI='/users';
    SELECT name, address FROM webusers2 LIMIT 1;
    CREATE OR REPLACE TABLE webusers
    ENGINE=CONNECT DEFAULT CHARSET=utf8
    TABLE_TYPE=JSON FILE_NAME='users.json'
    HTTP='http://jsonplaceholder.typicode.com' URI='/users'
    OPTION_LIST='Depth=2';
    CREATE TABLE `webusers3` (
      `id` BIGINT(2) NOT NULL,
      `name` CHAR(24) NOT NULL,
      `username` CHAR(16) NOT NULL,
      `email` CHAR(25) NOT NULL,
      `address_street` CHAR(17) NOT NULL `JPATH`='$.address.street',
      `address_suite` CHAR(9) NOT NULL `JPATH`='$.address.suite',
      `address_city` CHAR(14) NOT NULL `JPATH`='$.address.city',
      `address_zipcode` CHAR(10) NOT NULL `JPATH`='$.address.zipcode',
      `address_geo_lat` CHAR(8) NOT NULL `JPATH`='$.address.geo.lat',
      `address_geo_lng` CHAR(9) NOT NULL `JPATH`='$.address.geo.lng',
      `phone` CHAR(21) NOT NULL,
      `website` CHAR(13) NOT NULL,
      `company_name` CHAR(18) NOT NULL `JPATH`='$.company.name',
      `company_catchPhrase` CHAR(40) NOT NULL `JPATH`='$.company.catchPhrase',
      `company_bs` VARCHAR(36) NOT NULL `JPATH`='$.company.bs'
    ) ENGINE=CONNECT DEFAULT CHARSET=utf8 `TABLE_TYPE`='JSON' `FILE_NAME`='users.json' `OPTION_LIST`='Depth=2' `HTTP`='http://jsonplaceholder.typicode.com' `URI`='/users';
    SELECT name, address_city city, company_name company FROM webusers3;
    INSTALL SONAME 'ha_blackhole';
    [mariadb]
    ...
    plugin_load_add = ha_blackhole
    UNINSTALL SONAME 'ha_blackhole';
    CREATE TABLE table_name (
       id INT UNSIGNED PRIMARY KEY NOT NULL,
       v VARCHAR(30)
    ) ENGINE=BLACKHOLE;
    
    INSERT INTO table_name VALUES (1, 'bob'),(2, 'jane');
    
    SELECT * FROM table_name;
    Empty set (0.001 sec)
$ mkdir oem
$ cd oem
$ mkdir Release
    $ make -f oemrest.mak
    $ sudo cp rest.so /usr/local/mysql/lib/plugin
    #LINUX
    CPP = g++
    LD = g++
    OD = ./Release/
    SD = /home/olivier/MariaDB/server/storage/connect/
    CD =/usr/lib64
    # flags to compile object files that can be used in a dynamic library
    CFLAGS= -Wall -c -O3 -std=c++11 -fPIC -fno-rtti -I$(SD) -DXML_SUPPORT
# Replace -O3 by -g for debug
    LDFLAGS = -L$(CD) -lcpprest -lconnect
    
    # Flags to create a dynamic library.
    DYNLINKFLAGS = -shared
    # on some platforms, use '-G' instead.
    
    # REST library's archive file
    OEMREST = rest.so
    
    SRCS_CPP = $(SD)tabrest.cpp $(SD)restget.cpp
    OBJS_CPP = $(OD)tabrest.o $(OD)restget.o
    
    # top-level rule
    all: $(OEMREST)
    
    $(OEMREST): $(OBJS_CPP)
      $(LD) $(OBJS_CPP) $(LDFLAGS) $(DYNLINKFLAGS) -o $@
    
    #CPP Source files
    $(OD)tabrest.o:   $(SD)tabrest.cpp   $(SD)mini-global.h $(SD)global.h $(SD)plgdbsem.h $(SD)xtable.h $(SD)filamtxt.h $(SD)plgxml.h $(SD)tabdos.h  $(SD)tabfmt.h $(SD)tabjson.h $(SD)tabrest.h $(SD)tabxml.h
      $(CPP) $(CFLAGS) -o $@ $(SD)$(*F).cpp
    $(OD)restget.o:   $(SD)restget.cpp   $(SD)mini-global.h $(SD)global.h
      $(CPP) $(CFLAGS) -o $@ $(SD)$(*F).cpp
    
    # clean everything
    clean:
      $(RM) $(OBJS_CPP) $(OEMREST)
OPTION_LIST='Ftype=XML'
    CREATE  TABLE webw
    ENGINE=CONNECT TABLE_TYPE=OEM MODULE='Rest.dll' SUBTYPE=REST
    FILE_NAME='weatherdata.xml'
    HTTP='https://samples.openweathermap.org/data/2.5/forecast?q=London,us&mode=xml&appid=b6907d289e10d714a6e88b30761fae22'
    OPTION_LIST='Ftype=XML,Depth=3,Rownode=weatherdata';
    ALTER TABLE tab AUTO_INCREMENT=100;
    CREATE TABLE t1 (pk INT AUTO_INCREMENT PRIMARY KEY, i INT, UNIQUE (i)) ENGINE=InnoDB;
    
    INSERT INTO t1 (i) VALUES (1),(2),(3);
    INSERT IGNORE INTO t1 (pk, i) VALUES (100,1);
    Query OK, 0 rows affected, 1 warning (0.099 sec)
    
    SELECT * FROM t1;
    +----+------+
    | pk | i    |
    +----+------+
    |  1 |    1 |
    |  2 |    2 |
    |  3 |    3 |
    +----+------+
    
    SHOW CREATE TABLE t1\G
    *************************** 1. row ***************************
           Table: t1
    Create Table: CREATE TABLE `t1` (
      `pk` int(11) NOT NULL AUTO_INCREMENT,
      `i` int(11) DEFAULT NULL,
      PRIMARY KEY (`pk`),
      UNIQUE KEY `i` (`i`)
    ) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=latin1
    # Restart server
    SHOW CREATE TABLE t1\G
    *************************** 1. row ***************************
           Table: t1
    Create Table: CREATE TABLE `t1` (
      `pk` int(11) NOT NULL AUTO_INCREMENT,
      `i` int(11) DEFAULT NULL,
      PRIMARY KEY (`pk`),
      UNIQUE KEY `i` (`i`)
    ) ENGINE=InnoDB AUTO_INCREMENT=101 DEFAULT CHARSET=latin1
    CREATE TABLE name [coldef] ENGINE=CONNECT TABLE_TYPE=VIR
    [BLOCK_SIZE=n];

+--------------------------+----------------+--------------------+
| name                     | city           | company            |
+--------------------------+----------------+--------------------+
| Chelsey Dietrich         | Roscoeview     | Keebler LLC        |
| Mrs. Dennis Schulist     | South Christy  | Considine-Lockman  |
| Kurtis Weissnat          | Howemouth      | Johns Group        |
| Nicholas Runolfsdottir V | Aliyaview      | Abernathy Group    |
| Glenna Reichert          | Bartholomebury | Yost and Sons      |
| Clementina DuBuque       | Lebsackbury    | Hoeger LLC         |
+--------------------------+----------------+--------------------+


    BSON

    (Temporary) JSON table handled by the new JSON handling.

    CSV*$

    "Comma Separated Values" file in which each variable length record contains column values separated by a specific character (defaulting to the comma)

    DBF*

    File having the dBASE format.

    DOS

    The table is contained in one or several files. The file format can be refined by some other options of the command or more often using a specific type as many of those described below. Otherwise, it is a flat text file where columns are placed at a fixed offset within each record, the last column being of variable length.

    DIR

    Virtual table that returns a file list like the Unix ls or DOS dir command.

    FIX

    Text file arranged like DOS but with fixed length records.

    FMT

    File in which each record contains the column values in a non-standard format (the same for each record) This format is specified in the column definition.

    INI

    File having the format of the initialization or configuration files used by many applications.

    JDBC*

    Table accessed via a JDBC driver.

    JSON*$

    File having the JSON format.

    MAC

    Virtual table returning information about the machine and network cards (Windows only).

    MONGO*

    Table accessed via the MongoDB C Driver API.

    MYSQL*

    Table accessed using the MySQL API like the FEDERATED engine.

    OCCUR*

    A table based on another table existing on the current server, several columns of the object table containing values that can be grouped in only one column.

    ODBC*

    Table extracted from an application accessible via ODBC or unixODBC. For example from another DBMS or from an Excel spreadsheet.

    OEM*

    Table of any other formats not directly handled by CONNECT but whose access is implemented by an external FDW (foreign data wrapper) written in C++ (as a DLL or Shared Library).

    PIVOT*

    Used to "pivot" the display of an existing table or view.

    PROXY*

    A table based on another table existing on the current server.

    TBL*

Accessing a collection of tables as one table (like the MERGE engine does for MyISAM tables).

    VEC

    Binary file organized in vectors, in which column values are grouped consecutively, either split in separate files or in a unique file.

    VIR*

    Virtual table containing only special and virtual columns.

    WMI*

Windows Management Instrumentation table displaying information coming from a WMI provider. This type makes it possible to get, in tabular format, all sorts of information about the machine hardware and operating system (Windows only).

    XCOL*

    A table based on another table existing on the current server with one of its columns containing comma separated values.

    XML*$

    File having the XML or HTML format.

    ZIP

    Table giving information about the contents of a zip file.

    BIN
• CONNECT cannot perform block indexing on case-insensitive character columns. To force block indexing on a character column, specify its charset as not case insensitive, for instance as binary. However, this will also apply to all other clauses, the column now being case-sensitive.

  • While constructing the index, CONNECT also stores in memory the values of other used columns.




    Options marked with a ‘*’ must be specified in the option list.

    Examples of use:

    Example 1: Single CSV File Included in a Single ZIP File

    Let's suppose you have a CSV file from which you would create a table by:

    If the CSV file is included in a ZIP file, the CREATE TABLE becomes:
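Using the employee data of this example, the plain and zipped definitions might look like:

CREATE TABLE emp
... optional column definition
ENGINE=connect table_type=CSV file_name='E:/Data/employee.csv'
sep_char=';' header=1;

CREATE TABLE empzip
... optional column definition
ENGINE=connect table_type=CSV file_name='E:/Data/employee.zip'
sep_char=';' header=1 zipped=1 option_list='Entry=emp.csv';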

    The file_name option is the name of the zip file. The entry option is the name of the entry inside the zip file. If there is only one entry file inside the zip file, this option can be omitted.

    Example 2: Several CSV Files Included in a Single ZIP File

    If the table is made from several files such as emp01.csv, emp02.csv, etc., the standard create table would be:

    But if these files are all zipped inside a unique zip file, it becomes:

    Here the entry option is the pattern that the files inside the zip file must match. If all entry files are ok, the entry option can be omitted but the Boolean option mulentries must be specified as true.

    Example 3: Single CSV File included in Multiple ZIP Files (Without considering subfolders)

    If the table is created on several zip files, it is specified as for all other multiple tables:

    Here again the entry option is used to restrict the entry file(s) to be used inside the zip files and can be omitted if all are ok.

    The column descriptions can be retrieved by the discovery process for table types allowing it. It cannot be done for multiple tables or multiple entries.

    A catalog table can be created by adding catfunc=columns. This can be used to show the column definitions of multiple tables. Multiple must be set to false and the column definitions are the ones of the first table or entry.

    This first implementation has some restrictions:

    1. Zipped tables are read-only. UPDATE and DELETE are not supported. However, INSERT is supported in a specific way when making tables.

    2. The inside files are decompressed into memory. Memory problems may arise with huge files.

    3. Only file types that can be handled from memory are eligible for this. This includes DOS, FIX, BIN, CSV, FMT, DBF, JSON, and XML table types, as well as types based on these such as XCOL, OCCUR and PIVOT.

    Optimization by indexing or block indexing is possible for table types supporting it. However, it applies to the uncompressed table. This means that the whole table is always uncompressed.

    Partitioning is also supported. See how to do it in the section about partitioning.

    Creating New Zipped Tables

Tables can be created to access already existing zip files. However, it is also possible to make the zip file from an existing file or table. Two ways are available to make the zip file:

    Insert Method

INSERT can be used to make the table file for table types based on records (this excludes DBF, XML and JSON when pretty is not 0). However, the current implementation of the package used (minizip) does not support adding to an already existing zip entry. This means that when executing an INSERT statement, the inserted records would not be added but would replace the existing ones. CONNECT protects existing data by not allowing such inserts. Therefore, only three ways are available to do so:

    1. Using only one insert statement to make the whole table. This is possible only for small tables and is principally useful when making tests.

    2. Making the table from the data of another table. This can be done by executing an “insert into table select * from another_table” or by specifying “as select * from another_table” in the create table statement.

3. Making the table from a file whose format makes it possible to use the "load data infile" statement.

    To add a new entry in an existing zip file, specify “append=YES” in the option list. When inserting several entries, use ALTER to specify the required options, for instance:
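For instance, filling the znumul table (defined later on this page) with three successive entries made from the num1, num2 and num3 tables:

INSERT INTO znumul SELECT * FROM num1;
ALTER TABLE znumul option_list='Entry=Num2,Append=YES';
INSERT INTO znumul SELECT * FROM num2;
ALTER TABLE znumul option_list='Entry=Num3,Append=YES';
INSERT INTO znumul SELECT * FROM num3;
ALTER TABLE znumul option_list='Entry=Num*,Append=YES';
SELECT * FROM znumul;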

    The last ALTER is needed to display all the entries.

    File Zipping Method

This method makes the zip file from another file when creating the table. It applies to all table types, including DBF, XML and JSON. It is specified in the create table statement with the load option:

When executing this statement, the serv2.xml file is zipped as perso.zip. The entry name can be specified, or it defaults to the source file name.

    If the column descriptions are specified, the table can be used later to read from the zipped table, but they are not used when creating the zip file. Thus, a fake column (there must be one) can be specified and another table created to read the zip file. This one can take advantage of the discovery process to avoid providing the columns description for table types allowing it. For instance:

    It is also possible to create a multi-entries table from several files:

Here the files to load are specified with wildcard characters, and the mulentries option must be specified. However, the entry option must not be specified; entry names are made from the file names. Provide a fake column description if the files have different column layouts, but specific tables will have to be created to read each of them.

    ZIP Table Type

    A ZIP table type is also available. It is not meant to read the inside files but to display information about the zip file contents. For instance:
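A sketch of such a table, using the flag values described below:

CREATE TABLE xzipinfo2 (
  entry VARCHAR(256) NOT NULL,
  cmpsize BIGINT NOT NULL flag=1,
  uncsize BIGINT NOT NULL flag=2,
  method INT NOT NULL flag=3,
  date DATETIME NOT NULL flag=4)
ENGINE=connect table_type=ZIP file_name='E:/Data/Json/cities.zip';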

This will display the name, compressed size, uncompressed size, and compress method of all entries inside the zip file. The column names are irrelevant; it is the flag values that determine which information is retrieved.

    It is possible to retrieve this information from several zip files by specifying the multiple option:

    Here we added the special column zipname to get the name of the zip file for each entry.

    This page is licensed: CC BY-SA / Gnu FDL


    LOCK TABLE t1 WRITE;
    INSERT INTO t1 VALUES(...);
    INSERT INTO t1 VALUES(...);
    ...
    UNLOCK TABLES;
    SELECT d.diag, COUNT(*) cnt FROM diag d, patients p WHERE d.pnb =
    p.pnb AND ageyears < 17 AND county = 30 AND drg <> 11 AND d.diag
    BETWEEN 4296 AND 9434 GROUP BY d.diag ORDER BY cnt DESC;
    ALTER TABLE patients DROP PRIMARY KEY;
    ALTER TABLE patients ADD PRIMARY KEY (pnb) COMMENT 'DYNAMIC' dynam=1;
    CREATE TABLE virt ENGINE=CONNECT table_type=VIR block_size=10;
    SELECT concat('The square root of ', n, ' is') what,
    round(sqrt(n),16) value FROM virt;
    CREATE TABLE `virt` (
    `n` INT(11) NOT NULL `SPECIAL`=ROWID,
    PRIMARY KEY (`n`)
    ) ENGINE=CONNECT DEFAULT CHARSET=latin1 `TABLE_TYPE`='VIR'
    `BLOCK_SIZE`=10
    CREATE TABLE virt2 (
    n INT KEY NOT NULL special=ROWID,
    sig1 BIGINT AS ((n*(n+1))/2) virtual,
    sig2 BIGINT AS(((2*n+1)*(n+1)*n)/6) virtual)
    ENGINE=CONNECT table_type=VIR block_size=10000000;
    SELECT * FROM virt2 LIMIT 995, 5;
    SELECT * FROM virt2 WHERE n = 1664510;
    CREATE TABLE filler ENGINE=CONNECT table_type=VIR block_size=5000000;
    CREATE TABLE tp (
    id INT(6) KEY NOT NULL,
    name CHAR(16) NOT NULL,
    salary FLOAT(8,2));
    INSERT INTO tp SELECT n, 'unknown', NULL FROM filler WHERE n <= 10000;
    SELECT * FROM seq_100_to_150_step_10;
    SELECT n*10 FROM vir WHERE n BETWEEN 10 AND 15;
CREATE TABLE emp
... optional COLUMN definition
ENGINE=connect table_type=CSV file_name='E:/Data/employee.csv'
sep_char=';' header=1;
    CREATE TABLE empzip
    ... optional column definition
    ENGINE=connect table_type=CSV file_name='E:/Data/employee.zip'
    sep_char=';' header=1 zipped=1 option_list='Entry=emp.csv';
    CREATE TABLE empmul (
    ... required column definition
    ) ENGINE=connect table_type=CSV file_name='E:/Data/emp*.csv' 
    sep_char=';' header=1 multiple=1;
    CREATE TABLE empzmul
    ... required column definition
    ENGINE=connect table_type=CSV file_name='E:/Data/emp.zip'
    sep_char=';' header=1 zipped=1 option_list='Entry=emp*.csv';
    CREATE TABLE zempmul (
    ... required column definition
    ) ENGINE=connect table_type=CSV file_name='E:/Data/emp*.zip' 
    sep_char=';' header=1 multiple=1 zipped=yes 
    option_list='Entry=employee.csv';
    CREATE TABLE znumul (
    Chiffre INT(3) NOT NULL,
    Lettre CHAR(16) NOT NULL)
    ENGINE=CONNECT table_type=CSV
    file_name='C:/Data/FMT/mnum.zip' header=1 lrecl=20 zipped=1
    option_list='Entry=Num1';
    INSERT INTO znumul SELECT * FROM num1;
    ALTER TABLE znumul option_list='Entry=Num2,Append=YES';
    INSERT INTO znumul SELECT * FROM num2;
    ALTER TABLE znumul option_list='Entry=Num3,Append=YES';
    INSERT INTO znumul SELECT * FROM num3;
    ALTER TABLE znumul option_list='Entry=Num*,Append=YES';
    SELECT * FROM znumul;
    CREATE TABLE XSERVZIP (
    NUMERO VARCHAR(4) NOT NULL,
    LIEU VARCHAR(15) NOT NULL,
    CHEF VARCHAR(5) NOT NULL,
    FONCTION VARCHAR(12) NOT NULL,
    NOM VARCHAR(21) NOT NULL)
    ENGINE=CONNECT table_type=XML file_name='E:/Xml/perso.zip' zipped=1
    option_list='entry=services,load=E:/Xml/serv2.xml';
    CREATE TABLE mkzq (whatever INT)
    ENGINE=connect table_type=DBF zipped=1
    file_name='C:/Data/EAUX/dbf/CQUART.ZIP'
    option_list='Load=C:/Data/EAUX/dbf/CQUART.DBF';
    CREATE TABLE zquart
    ENGINE=connect table_type=DBF zipped=1
    file_name='C:/Data/EAUX/dbf/CQUART.ZIP';
    CREATE TABLE znewcities (
      _id CHAR(5) NOT NULL,
      city CHAR(16) NOT NULL,
      lat DOUBLE(18,6) NOT NULL `FIELD_FORMAT`='loc:[0]',
      lng DOUBLE(18,6) NOT NULL `FIELD_FORMAT`='loc:[1]',
      pop INT(6) NOT NULL,
      state CHAR(2) NOT NULL
    ) ENGINE=CONNECT TABLE_TYPE=JSON FILE_NAME='E:/Json/newcities.zip' ZIPPED=1 LRECL=1000 OPTION_LIST='Load=E:/Json/city_*.json,mulentries=YES,pretty=0';
    CREATE TABLE xzipinfo2 (
    entry VARCHAR(256)NOT NULL,
    cmpsize BIGINT NOT NULL flag=1,
    uncsize BIGINT NOT NULL flag=2,
    method INT NOT NULL flag=3,
    date DATETIME NOT NULL flag=4)
    ENGINE=connect table_type=ZIP file_name='E:/Data/Json/cities.zip';
    CREATE TABLE TestZip1 (
    entry VARCHAR(260)NOT NULL,
    cmpsize BIGINT NOT NULL flag=1,
    uncsize BIGINT NOT NULL flag=2,
    method INT NOT NULL flag=4,
    date DATETIME NOT NULL flag=4,
    zipname VARCHAR(256) special='FILEID')
    ENGINE=connect table_type=ZIP multiple=1
    file_name='C:/Data/Ziptest/CCAM06300_DBF_PART*.zip';
If TRANSACTIONAL is not set to any value, then any row format is supported. If ROW_FORMAT is set, then the table will use that row format. Otherwise, the table will use the default PAGE row format. If the table uses the PAGE row format, then it is crash-safe. If it uses some other row format, then it is not crash-safe.

    At startup Aria will check the Aria logs and automatically recover the tables from the last checkpoint if the server was not taken down correctly. See Aria Log Files

  • Possible values to try are 2048, 4096 or 8192

  • Note that you can't change this without dumping, deleting old tables and deleting all log files and then restoring your Aria tables. (This is the only option that requires a dump and load.)

  • aria-log-purge-type

    • Set this to "at_flush" if you want to keep a copy of the transaction logs (good as an extra backup). The logs will stay around until you execute FLUSH ENGINE LOGS.

    CONNECT lets you view it as a table in two different ways.

    Column layout

    The first way is to regard it as a table having one line per section, the columns being the keys you want to display. In this case, the CREATE statement could be:
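For the contact.ini file shown earlier, it could be:

CREATE TABLE contact (
  contact CHAR(16) flag=1,
  name CHAR(20),
  forename CHAR(32),
  hired DATE date_format='DD/MM/YYYY',
  address CHAR(64),
  city CHAR(20),
  zipcode CHAR(8),
  tel CHAR(16))
ENGINE=CONNECT table_type=INI file_name='contact.ini';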

    The column that will contain the section name can have any name but must specify flag=1. All other columns must have the names of the keys we want to display (case insensitive). The type can be character or numeric depending on the key value type, and the length is the maximum expected length for the key value. Once done, the statement:
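For instance:

SELECT contact, name, hired, city, tel FROM contact;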

    This statement will display the file in tabular format.

+---------+----------+------------+--------------+----------------+
| contact | name     | hired      | city         | tel            |
+---------+----------+------------+--------------+----------------+
| BER     | Bertrand | 1970-01-01 | Issy-les-Mlx | 09.54.36.29.60 |
| WEL     | Schmitt  | 1985-02-19 | Berlin       | 03.43.377.360  |
| UK1     | Smith    | 2003-11-08 | London       | NULL           |
+---------+----------+------------+--------------+----------------+

    Only the keys defined in the create statements are visible; keys that do not exist in a section are displayed as null or pseudo null (blank for character, 1/1/70 for dates, and 0 for numeric) for columns declared NOT NULL.

    All relational operations can be applied to this table. The table (and the file) can be updated, inserted and conditionally deleted. The only constraint is that when inserting values, the section name must be the first in the list of values.

Note 1: When inserting, if a section already exists, no new section is created, but the new values are added to or replace those of the existing section. Thus, the following two commands are equivalent:
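For example, on the contact table defined above:

UPDATE contact SET forename = 'Harry' WHERE contact = 'UK1';
INSERT INTO contact (contact,forename) VALUES('UK1','Harry');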

    Note 2: Because sections represent one line, a DELETE statement on a section key will delete the whole section.

    Row layout

To be a good candidate for tabular representation, an INI file should often have the same keys in all sections. In practice, many files commonly found on computers, such as the win.ini file of the Windows directory or the my.ini file, cannot be viewed that way because each section has different keys. In this case, a second way is to regard the file as a table having one row per section key, whose columns can be the section name, the key name, and the key value.

    For instance, let us define the table:
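For the same contact.ini file:

CREATE TABLE xcont (
  section CHAR(16) flag=1,
  keyname CHAR(16) flag=2,
  value CHAR(32))
ENGINE=CONNECT table_type=INI file_name='contact.ini'
option_list='Layout=Row';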

    In this statement, the "Layout" option sets the display format, Column by default or anything else not beginning by 'C' for row layout display. The names of the three columns can be freely chosen. The Flag option gives the meaning of the column. Specify flag=1 for the section name and flag=2 for the key name. Otherwise, the column will contain the key value.

    Once done, the command:
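On the xcont table just defined:

SELECT * FROM xcont;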

    Will display the following result:

+---------+----------+--------------------------+
| section | keyname  | value                    |
+---------+----------+--------------------------+
| BER     | name     | Bertrand                 |
| BER     | forename | Olivier                  |
| BER     | address  | 21 rue Ferdinand Buisson |
| BER     | city     | Issy-les-Mlx             |
| BER     | zipcode  | 92130                    |
| BER     | tel      | 09.54.36.29.60           |
| BER     | cell     | 06.70.06.04.16           |
| WEL     | name     | Schmitt                  |
| WEL     | forename | Bernard                  |
| WEL     | hired    | 19/02/1985               |
| WEL     | address  | 64 tiergarten strasse    |
| WEL     | city     | Berlin                   |
| WEL     | zipcode  | 95013                    |
| WEL     | tel      | 03.43.377.360            |
| UK1     | name     | Smith                    |
| UK1     | forename | Henry                    |
| UK1     | hired    | 08/11/2003               |
| UK1     | address  | 143 Blum Rd.             |
| UK1     | city     | London                   |
| UK1     | zipcode  | NW1 2BP                  |
+---------+----------+--------------------------+

    Note: When processing an INI table, all section names are retrieved in a buffer of 8K bytes (2048 bytes before 10.0.17). For a big file having many sections, this size can be increased using for example:
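For instance, in the table's option list:

option_list='seclen=16K';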

    This page is licensed: CC BY-SA / Gnu FDL


    The file mongo.def: (required only on Windows)

    Compiling this OEM

    To compile this OEM module, first make the two or three required files by copy/pasting from the above listings.

Even if this module is to be used with a binary distribution, you need some source files in order to compile it successfully: at a minimum, the CONNECT header files that are included in jmgoem.cpp and the ones they include in turn. These can be obtained by downloading the MariaDB source tar.gz and extracting the CONNECT source files from it into a directory, which must be added to the additional source directories if it is not the directory containing the above files.

The module must be linked to the ha_connect.lib of the binary version it will be used with. Recent distributions add this lib in the plugin directory.

The resulting module, for instance mongo.so or mongo.dll, must be placed in the plugin directory of the MariaDB server. Then, you are able to use MONGO-like tables simply by replacing, in the CREATE TABLE statement, the option TABLE_TYPE=MONGO with TABLE_TYPE=OEM SUBTYPE=MONGO MODULE='mongo.(so|dll)'. Actually, the module name, here supposedly 'mongo', can be anything you like.

This will work with the last (not yet) distributed versions of and 10.1 because, even if it is not enabled, the MONGO type is included in them. This is also the case for , but then, on Windows, you will have to define NOEXP and NOMGOCOL because these functions are not exported by this version.

    To implement for older versions that do not contain the MONGO type, you can add the corresponding source files, namely javaconn.cpp, jmgfam.cpp, jmgoconn.cpp, mongo.cpp and tabjmg.cpp that you should find in the CONNECT extracted source files if you downloaded a recent version. As they include my_global.h, this is the reason why the included file was named this way. In addition, your compiling should define HAVE_JMGO and HAVE_JAVACONN. Of course, this is possible only if ha_connect.lib is available.

    This page is licensed: CC BY-SA / Gnu FDL

    The Purge Threads perform garbage collection of the InnoDB Undo Log. When a row is updated in the clustered index, InnoDB updates the values in the clustered index, and the old row version is added to the Undo Log. The Purge Threads scan the Undo Log for row versions that are not needed by open transactions and permanently delete them. In ES 10.5 and later, if the remaining clustered index record is the oldest possible row version, the Purge Thread resets the record's hidden DB_TRX_ID field to 0.

  • The Purge Threads perform garbage collection of index records. When an indexed column is updated, InnoDB creates a new index record for the updated value in each affected index, and the old index records are delete-marked. When the primary key column is updated, InnoDB creates a new index record for the updated value in every index, and each old index record is delete-marked. The Purge Threads scan for delete-marked index records and permanently delete them.

  • The Purge Threads perform garbage collection of freed overflow pages. BLOB, CHAR, TEXT, VARCHAR, VARBINARY, and related types are sometimes stored on overflow pages. When the value on the overflow page is deleted or updated, the overflow page is no longer needed. The Purge Threads delete these freed overflow pages.

Feature Summary

    • Thread: InnoDB Purge Threads

    • Storage Engine: InnoDB

    • Purpose: Garbage collection of the InnoDB Undo Log, delete-marked secondary index records, and freed overflow pages

    • Quantity: Set by innodb_purge_threads

    • Availability: All ES and CS versions

    Configuring the Purge Threads

    The number of purge threads can be set by configuring the innodb_purge_threads system variable. This system variable can be specified as a command-line argument to mariadbd or it can be specified in a relevant server option group in an option file:
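For example, a server option group could set eight purge threads (the value here is purely illustrative):

[mariadb]
...
innodb_purge_threads=8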

    Optimizing Purge Performance

    Configuring the Purge Batch Size

    The purge batch size is defined as the number of InnoDB undo log records that must be written before triggering purge. The purge batch size can be set by configuring the innodb_purge_batch_size system variable. This system variable can be specified as a command-line argument to mariadbd or it can be specified in a relevant server option group in an option file:
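For example (again, the value is only illustrative):

[mariadb]
...
innodb_purge_batch_size = 50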

    Configuring the Max Purge Lag

If purge operations are lagging on a busy server, this can be a tough situation to recover from. As a solution, InnoDB allows you to set the max purge lag. The max purge lag is defined as the maximum number of InnoDB undo log records that can be waiting to be purged from the history list before InnoDB begins delaying DML statements.

    The max purge lag can be set by configuring the innodb_max_purge_lag system variable. This system variable can be changed dynamically with SET GLOBAL:

    This system variable can also be specified as a command-line argument to mariadbd or it can be specified in a relevant server option group in an option file:
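For example, with an illustrative limit of 1000 records:

SET GLOBAL innodb_max_purge_lag=1000;

Or, in an option file:

[mariadb]
...
innodb_max_purge_lag = 1000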

    The maximum delay can be set by configuring the innodb_max_purge_lag_delay system variable. This system variable can be changed dynamically with SET GLOBAL:

    This system variable can also be specified as a command-line argument to mariadbd or it can be specified in a relevant server option group in an option file:
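For example, with an illustrative cap of 100:

SET GLOBAL innodb_max_purge_lag_delay=100;

Or, in an option file:

[mariadb]
...
innodb_max_purge_lag_delay = 100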

    Configuring the Purge Rollback Segment Truncation Frequency

    The purge rollback segment truncation frequency is defined as the number of purge loops that are run before unnecessary rollback segments are truncated. The purge rollback segment truncation frequency can be set by configuring the innodb_purge_rseg_truncate_frequency system variable. This system variable can be changed dynamically with SET GLOBAL:

    This system variable can also be specified as a command-line argument to mariadbd or it can be specified in a relevant server option group in an option file:
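For example, truncating unnecessary rollback segments every 64 purge loops (an illustrative value):

SET GLOBAL innodb_purge_rseg_truncate_frequency=64;

Or, in an option file:

[mariadb]
...
innodb_purge_rseg_truncate_frequency = 64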

    Configuring the Purge Undo Log Truncation

    Purge undo log truncation occurs when InnoDB truncates an entire InnoDB undo log tablespace, rather than deleting individual InnoDB undo log records.

    Purge undo log truncation can be enabled by configuring the innodb_undo_log_truncate system variable. This system variable can be changed dynamically with SET GLOBAL:

    This system variable can also be specified as a command-line argument to mariadbd or it can be specified in a relevant server option group in an option file:

    An InnoDB undo log tablespace is truncated when it exceeds the maximum size that is configured for InnoDB undo log tablespaces. The maximum size can be set by configuring the innodb_max_undo_log_size system variable. This system variable can be changed dynamically with SET GLOBAL:

    This system variable can also be specified as a command-line argument to mariadbd or it can be specified in a relevant server option group in an option file:
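For example, enabling undo log truncation and capping the undo tablespace size at an illustrative 64M:

SET GLOBAL innodb_undo_log_truncate=ON;
SET GLOBAL innodb_max_undo_log_size='64M';

Or, in an option file:

[mariadb]
...
innodb_undo_log_truncate = ON
innodb_max_undo_log_size = 64M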

    Purge's Effect on Row Metadata

    An InnoDB table's clustered index has three hidden system columns that are automatically generated. These hidden system columns are:

• DB_ROW_ID - If the table has no PRIMARY KEY and no UNIQUE KEY defined as NOT NULL that can be promoted to the table's PRIMARY KEY, then InnoDB uses a hidden system column called DB_ROW_ID. InnoDB automatically generates the value for the column from a global, InnoDB-wide 48-bit sequence (instead of a table-local one).

    • DB_TRX_ID - The transaction ID of either the transaction that last changed the row or the transaction that currently has the row locked.

• DB_ROLL_PTR - A pointer to the InnoDB undo log record that contains the row's previous version. The value of DB_ROLL_PTR is only valid if DB_TRX_ID belongs to the current read view. The oldest valid read view is the purge view.

If a row's last InnoDB undo log record is purged, this can obviously affect the value of the row's DB_ROLL_PTR column, because there would no longer be any InnoDB undo log record for the pointer to reference.

    The purge process will set a row's DB_TRX_ID column to 0 after all of the row's associated InnoDB undo log records have been deleted. This change allows InnoDB to perform an optimization: if a query wants to read a row, and if the row's DB_TRX_ID column is set to 0, then it knows that no other transaction has the row locked. Usually, InnoDB needs to lock the transaction system's mutex in order to safely check whether a row is locked, but this optimization allows InnoDB to confirm that the row can be safely read without any heavy internal locking.

This optimization can speed up reads, but it comes at a noticeable cost at other times. For example, it can cause the purge process to use more I/O after inserting a lot of rows, since the value of each row's DB_TRX_ID column will have to be reset.

    This page is licensed: CC BY-SA / Gnu FDL

InnoDB undo log

    • Location: By default, located in the InnoDB system tablespace. When innodb_undo_tablespaces is set, located in the directory set by innodb_undo_directory (defaults to datadir).

    • Quantity: Set by innodb_undo_tablespaces

    • Size: 10 MB per tablespace by default (grows as needed)

    CONNECT BIN Table Type

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    Overview

    A table of type BIN is physically a binary file in which each row is a logical record of fixed length[1]. Within a record, column fields are of a fixed offset and length as with FIX tables. Specific to BIN tables is that numerical values are internally encoded using native platform representation, so no conversion is needed to handle numerical values in expressions.

    It is not required that the lines of a BIN file be separated by characters such as CR and/or LF but this is possible. In such an event, the lrecl option must be specified accordingly.

Note: Unlike for the DOS and FIX types, the width of the fields is the length of their internal representation in the file. For instance, for a column declared as:
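An INT column with display width 5 occupies 4 bytes in the file:

number int(5) not null,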

    The field width in the file is 4 characters, the size of a binary integer. This is the value used to calculate the offset of the next field if it is not specified. Therefore, if the next field is placed 5 characters after this one, this declaration is not enough, and the flag option will have to be used on the next field.

    Type Conversion in BIN Tables

    Here are the correspondences between the column type and field format provided by default:

Column type    File default format
Char(n)        Text of n characters
Date           Integer (4 bytes)
Int(n)         Integer (4 bytes)
Smallint(n)    Short integer (2 bytes)
TinyInt(n)     Char (1 byte)
Bigint(n)      Large integer (8 bytes)
Double(n,d)    Double floating point (8 bytes)

However, the column type need not necessarily match the field format within the table file. In particular, this occurs for field formats that correspond to numeric types that are not handled by CONNECT[2]. Indeed, BIN table files may internally contain float numbers or binary numbers of any byte length in big-endian or little-endian representation[3]. Also, as in DOS or FIX tables, you may want to handle some character fields as numeric or vice versa.

This is why it is possible to specify the field format, when it does not correspond to the column type default, using the field_format column option in the CREATE TABLE statement. Here are the available field formats for BIN tables:

Field_format           Internal representation
[n]{L or B or H}[n]    n bytes binary number in little endian, big endian or host endian representation
C                      Characters string (n bytes)
I                      Integer (4 bytes)
D                      Double float (8 bytes)
S                      Short integer (2 bytes)
T                      Tiny integer (1 byte)
G                      Big integer (8 bytes)
F or R                 Real or float (floating point number on 4 bytes)
X                      Use the default format field for the column type

All field formats (except the first one) are a one-character specification[4]. 'X' is equivalent to not specifying the field format. For the 'C' character specification, n is the column width as specified with the column type. For one-character formats, the number of bytes of the numeric fields corresponds to what it is on most platforms. However, it could vary for some. The G, I, S and T formats are deprecated because they correspond to supported data types and may not be supported in future versions.

    Example

    Here is an example of a BIN table. The file record layout is supposed to be:
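Each letter stands for one byte of the record:

NNNNCCCCCCCCCCIIIISSFFFFSS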

    Here N represents numeric characters, C any characters, I integer bytes, S short integer bytes, and F float number bytes. The IIII field contains a date in numeric format.

    The table could be created by:
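One possible definition, matching the layout above:

CREATE TABLE testbal (
  fig INT(4) NOT NULL field_format='C',
  name CHAR(10) NOT NULL,
  birth DATE NOT NULL field_format='L',
  id CHAR(5) NOT NULL field_format='L2',
  salary DOUBLE(9,2) NOT NULL DEFAULT 0.00 field_format='F',
  dept INT(4) NOT NULL field_format='L2')
ENGINE=CONNECT table_type=BIN block_size=5 file_name='Testbal.dat';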

    Specifying the little-endian representation for binary values is not useful on most machines, but makes the create table statement portable on a machine using big endian, as well as the table file.

The field offsets and the file record length are calculated according to the column internal format, eventually modified by the field format. It is not necessary to specify them for a packed binary file without line endings. If a line ending is desired, specify the ending option or specify the lrecl option adding the ending width. The table can be filled by:
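For example, inserting the three sample rows:

INSERT INTO testbal VALUES
  (5500,'ARCHIBALD','1980-01-25','3789',4380.50,318),
  (123,'OLIVER','1953-08-10','23456',3400.68,2158),
  (3123,'FOO','2002-07-23','888',default,318);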

    Note that the types of the inserted values must match the column type, not the field format type.

    The query:
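A simple full scan of the table:

SELECT * FROM testbal;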

    returns:

+------+-----------+------------+-------+---------+------+
| fig  | name      | birth      | id    | salary  | dept |
+------+-----------+------------+-------+---------+------+
| 5500 | ARCHIBALD | 1980-01-25 | 3789  | 4380.50 |  318 |
|  123 | OLIVER    | 1953-08-10 | 23456 | 3400.68 | 2158 |
| 3123 | FOO       | 2002-07-23 | 888   |    0.00 |  318 |
+------+-----------+------------+-------+---------+------+

    Numeric fields alignment

    In binary files, numeric fields and record length can be aligned on 4-or-8-byte boundaries to optimize performance on certain processors. This can be modified in the OPTION_LIST with an "align" option ("packed" meaning align=1 is the default).

    1. Sometimes it can be a physical record if LF or CRLF have been written in the file.

    2. Most of these are obsolete because CONNECT supports all column types except float

    3. The default endian representation used in the table file can be specified by setting the ENDIAN option as ‘L’ or ‘B’ in the option list.

    4. It can be specified with more than one character, but only the first one is significant.

    This page is licensed: CC BY-SA / Gnu FDL

    Differences Between FederatedX and Federated

    This page outlines the key enhancements in FederatedX over the original Federated engine, including support for transactions and a refactored codebase. This storage engine has been deprecated.

    This storage engine has been deprecated.

    The main differences are:

    New features in FederatedX

    • Transactions (beta feature)

    • Supports partitions (alpha feature)

    • New class structure which allows developers to write connection classes for other RDBMSs without having to modify base classes for FederatedX

    Different behavior

    • FederatedX is statically compiled into MariaDB by default.

• When you create a table with FederatedX, the connection is tested. The CREATE will fail if MariaDB can't connect to the remote host or if the remote table doesn't exist.

    This page is licensed: CC BY-SA / Gnu FDL

    CONNECT - NoSQL Table Types

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

These table types are based on files that do not match the relational format but often represent hierarchical data. CONNECT can handle JSON, INI-CFG, XML, and some HTML files.

The way it is done is different from what MySQL or PostgreSQL does. Rather than including in a table some column values in a specific data format (JSON, XML) to be handled by specific functions, CONNECT directly uses JSON, XML or INI files that can be produced by other applications, and it is the table definition that describes where and how the contained information must be retrieved.

    This is also different from what MariaDB does with dynamic columns, which is close to what MySQL and PostgreSQL do with the JSON column type.

    Note: The LEVEL option used with these tables should, from Connect 1.07.0002, be specified as DEPTH. Also, what was specified with the FIELD_FORMAT column option should now also be specified using JPATH or XPATH.

    This page is licensed: CC BY-SA / Gnu FDL

    CONNECT TBL Table Type: Table List

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

This type allows defining a table as a list of tables of any engine and type. This is more flexible than multiple tables, which must all be of the same file type. This type does what the MERGE engine does, but is more powerful.

The list of the columns of the TBL table does not necessarily include all the columns of the tables in the list. If the name of some columns differs in the sub-tables, the column to use can be specified by its position, given by the FLAG option of the column. If the ACCEPT option is set to true (Y or 1), columns that do not exist in some of the sub-tables are accepted, and their values are null or pseudo-null (depending on the nullability of the column) for the tables not having this column. The column types can also be different, and an automatic conversion is done if necessary.

    InnoDB Introduction

    An overview of the InnoDB storage engine, detailing its support for ACID transactions, row-level locking, and crash recovery.

    Overview

    MariaDB Enterprise Server uses the InnoDB storage engine by default. InnoDB is a general purpose transactional storage engine that is performant, ACID-compliant, and well-suited for most workloads.

    Benefits

    CONNECT OCCUR Table Type

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

Similarly to the XCOL table type, OCCUR is an extension to the PROXY type when referring to a table or view having several columns containing the same kind of data. It enables a different view of the table where the data from these columns are put in a single column, eventually causing several rows to be generated from one row of the object table. For example, supposing we have a pets table:

    name
    dog
    cat

    CONNECT - Using the TBL and MYSQL Table Types Together

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

Used together, these types lift all the limitations of the MERGE and FEDERATED engines.

MERGE: Its limitation is obvious: the merged tables must be identical tables, and MyISAM is not even the default engine for MariaDB. However, TBL accesses a collection of CONNECT tables, and because these tables can be user-specified or internally created tables, there is no limitation on the type of the tables that can be merged.

TBL is also much more flexible. The merged tables need not be "identical"; they just should have the columns defined in the TBL table. If the type of a column in a merged table is not the one of the corresponding column of the TBL table, the column value is converted. As we have seen, if a column of the TBL table does not exist in one of the merged tables, the corresponding value is set to null. If columns in a sub-table have a different name, they can be accessed by position using the FLAG column option of CONNECT.

    Using CONNECT - Virtual and Special Columns

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

CONNECT supports MariaDB virtual columns. It is also possible to declare a column as being a CONNECT special column. Let us see with an example how this can be done. The boys table we have seen previously can be recreated as:

    We have defined two CONNECT special columns. You can give them any name; it is the field SPECIAL option that specifies the special column functional name.

    Note: the default values specified for the special columns do not mean anything. They are specified just to prevent getting warning messages when inserting new rows.

    For the definition of the agehired virtual column, no CONNECT options can be specified as it has no offset or length, not being stored in the file.

    Using CONNECT - Exporting Data From MariaDB

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

Exporting data from MariaDB is obviously possible with CONNECT, in particular for all formats not supported by the SELECT INTO OUTFILE statement. Let us consider the query:
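It lists the storage engine plugins known to the server:

SELECT
    plugin_name AS handler,
    plugin_version AS version,
    plugin_author AS author,
    plugin_description AS description,
    plugin_maturity AS maturity
FROM
    information_schema.plugins
WHERE
    plugin_type = 'STORAGE ENGINE';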

    Supposing you want to get the result of this query into a file handlers.htm in XML/HTML format, allowing displaying it on an Internet browser, this is how you can do it:

Just create the CONNECT table that will be used to make the file:
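A sketch of the statement, reusing the query above:

CREATE TABLE handout
ENGINE=CONNECT
table_type=XML
file_name='handout.htm'
header=yes
option_list='name=TABLE,coltype=HTML,attribute=border=1;cellpadding=5,headattr=bgcolor=yellow'
AS
SELECT
    plugin_name AS handler,
    plugin_version AS version,
    plugin_author AS author,
    plugin_description AS description,
    plugin_maturity AS maturity
FROM
    information_schema.plugins
WHERE
    plugin_type = 'STORAGE ENGINE';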

    Here the column definition is not given and will come from the Select statement following the Create. The CONNECT options are the same we have seen previously. This will do both actions, creating the matching handlers CONNECT table and 'filling' it with the query result.

    Aria file version: 1
    Block size: 8192
    maria_uuid: ee948482-6cb7-11ed-accb-3c7c3ff16468
    last_checkpoint_lsn: (1,0x235a)
    last_log_number: 1
    trid: 28
    recovery_failures: 0
    [BER]
    name=Bertrand
    forename=Olivier
    address=21 rue Ferdinand Buisson
    city=Issy-les-Mlx
    zipcode=92130
    tel=09.54.36.29.60
    cell=06.70.06.04.16
    
    [WEL]
    name=Schmitt
    forename=Bernard
    hired=19/02/1985
    address=64 tiergarten strasse
    city=Berlin
    zipcode=95013
    tel=03.43.377.360
    
    [UK1]
    name=Smith
    forename=Henry
    hired=08/11/2003
    address=143 Blum Rd.
    city=London
    zipcode=NW1 2BP
    CREATE TABLE contact (
      contact CHAR(16) flag=1,
      name CHAR(20),
      forename CHAR(32),
      hired DATE date_format='DD/MM/YYYY',
      address CHAR(64),
      city CHAR(20),
      zipcode CHAR(8),
      tel CHAR(16))
    ENGINE=CONNECT table_type=INI file_name='contact.ini';
    SELECT contact, name, hired, city, tel FROM contact;
    UPDATE contact SET forename = 'Harry' WHERE contact = 'UK1';
    INSERT INTO contact (contact,forename) VALUES('UK1','Harry');
    create table xcont (
      section char(16) flag=1,
      keyname char(16) flag=2,
      value char(32))
    engine=CONNECT table_type=INI file_name='contact.ini'
    option_list='Layout=Row';
    SELECT * FROM xcont;
    option_list='seclen=16K';
    /***********************************************************************/
    /*  Definitions needed by the included files.                          */
    /***********************************************************************/
    #if !defined(MY_GLOBAL_H)
    #define MY_GLOBAL_H
    typedef unsigned int uint;
    typedef unsigned int uint32;
    typedef unsigned short ushort;
    typedef unsigned long ulong;
    typedef unsigned long DWORD;
    typedef char *LPSTR;
    typedef const char *LPCSTR;
    typedef int BOOL;
    #if defined(__WIN__)
    typedef void *HANDLE;
    #else
    typedef int HANDLE;
    #endif
    typedef char *PSZ;
    typedef const char *PCSZ;
    typedef unsigned char BYTE;
    typedef unsigned char uchar;
    typedef long long longlong;
    typedef unsigned long long ulonglong;
    typedef char my_bool;
    struct charset_info_st {};
    typedef const charset_info_st CHARSET_INFO;
    #define FALSE 0
    #define TRUE  1
    #define Item char
    #define MY_MAX(a,b) ((a>b)?(a):(b))
    #define MY_MIN(a,b) ((a<b)?(a):(b))
    #endif // MY_GLOBAL_H
    /************* jmgoem C++ Program Source Code File (.CPP) **************/
    /* PROGRAM NAME: jmgoem    Version 1.0                                 */
    /*  (C) Copyright to the author Olivier BERTRAND          2017         */
    /*  This program is the Java MONGO OEM module definition.              */
    /***********************************************************************/
    
    /***********************************************************************/
    /*  Definitions needed by the included files.                          */
    /***********************************************************************/
    #include "my_global.h"
    
    /***********************************************************************/
    /*  Include application header files:                                  */
    /*  global.h    is header containing all global declarations.          */
    /*  plgdbsem.h  is header containing the DB application declarations.  */
    /*  (x)table.h  is header containing the TDBASE declarations.          */
    /*  tabext.h    is header containing the TDBEXT declarations.          */
    /*  mongo.h     is header containing the MONGO declarations.           */
    /***********************************************************************/
    #include "global.h"
    #include "plgdbsem.h"
    #if defined(HAVE_JMGO)
    #include "csort.h"
    #include "javaconn.h"
    #endif   // HAVE_JMGO
    #include "xtable.h"
    #include "tabext.h"
    #include "mongo.h"
    
    /***********************************************************************/
    /*  These functions are exported from the MONGO library.         	  */
    /***********************************************************************/
    extern "C" {
      PTABDEF __stdcall GetMONGO(PGLOBAL, void*);
      PQRYRES __stdcall ColMONGO(PGLOBAL, PTOS, void*, char*, char*, bool);
    } // extern "C"
    
    /***********************************************************************/
    /*  DB static variables.                                               */
    /***********************************************************************/
    int TDB::Tnum;
    int DTVAL::Shift;
    #if defined(HAVE_JMGO)
    int    CSORT::Limit = 0;
    double CSORT::Lg2 = log(2.0);
    size_t CSORT::Cpn[1000] = {0};          /* Precalculated cmpnum values */
    #if defined(HAVE_JAVACONN)
    char *JvmPath = NULL;
    char *ClassPath = NULL;
    char *GetPluginDir(void) 
    {return "C:/mongo-java-driver/mongo-java-driver-3.4.2.jar;"
            "C:/MariaDB-10.1/MariaDB/storage/connect/";}
    char *GetJavaWrapper(void) {return (char*)"wrappers/Mongo3Interface";}
    #else   // !HAVE_JAVACONN
    HANDLE JAVAConn::LibJvm;              // Handle to the jvm DLL
    CRTJVM JAVAConn::CreateJavaVM;
    GETJVM JAVAConn::GetCreatedJavaVMs;
    #if defined(_DEBUG)
    GETDEF JAVAConn::GetDefaultJavaVMInitArgs;
    #endif  //  _DEBUG
    #endif	// !HAVE_JAVACONN
    #endif   // HAVE_JMGO
    
    /***********************************************************************/
    /*  This function returns a Mongo definition class.                    */
    /***********************************************************************/
    PTABDEF __stdcall GetMONGO(PGLOBAL g, void *memp)
    {
      return new(g, memp) MGODEF;
    } // end of GetMONGO
    
    #ifdef NOEXP
    /***********************************************************************/
    /* Functions to be defined if not exported by the CONNECT version.     */
    /***********************************************************************/
    bool IsNum(PSZ s)
    {
      for (char *p = s; *p; p++)
        if (*p == ']')
          break;
        else if (!isdigit(*p) || *p == '-')
          return false;
    
      return true;
    }	// end of IsNum
    #endif
    
    /***********************************************************************/
    /*  Return the columns definition to MariaDB.                          */
    /***********************************************************************/
    PQRYRES __stdcall ColMONGO(PGLOBAL g, PTOS tp, char *tab,
                                                   char *db, bool info)
    {
    #ifdef NOMGOCOL
      // Cannot use discovery
      strcpy(g->Message, "No discovery, MGOColumns is not accessible");
      return NULL;
    #else
      return MGOColumns(g, db, NULL, tp, info);
    #endif
    } // end of ColMONGO
    LIBRARY     MONGO
    EXPORTS
       GetMONGO     @1
       ColMONGO     @2
    [mariadb]
    ...
    innodb_purge_threads=8
    SET GLOBAL innodb_purge_threads=8;
    
    SHOW GLOBAL VARIABLES
       LIKE 'innodb_purge_threads';
    +----------------------+-------+
    | Variable_name        | Value |
    +----------------------+-------+
    | innodb_purge_threads | 8     |
    +----------------------+-------+
    [mariadb]
    ...
    innodb_purge_batch_size = 50
    SET GLOBAL innodb_max_purge_lag=1000;
    [mariadb]
    ...
    innodb_max_purge_lag = 1000
    SET GLOBAL innodb_max_purge_lag_delay=100;
    [mariadb]
    ...
    innodb_max_purge_lag_delay = 100
    SET GLOBAL innodb_purge_rseg_truncate_frequency=64;
    [mariadb]
    ...
    innodb_purge_rseg_truncate_frequency = 64
    SET GLOBAL innodb_undo_log_truncate=ON;
    [mariadb]
    ...
    innodb_undo_log_truncate = ON
    SET GLOBAL innodb_max_undo_log_size='64M';
    [mariadb]
    ...
    innodb_max_undo_log_size = 64M


    Quantity

    Set by innodb_purge_threads

    Configure the InnoDB Purge Threads

    InnoDB undo log
    MariaDB Enterprise Server

    Double(n,d)

    Double floating point (8 bytes)

    G

    Big integer (8 bytes)

    F or R

    Real or float (Floating point number on 4 bytes)

    X

    Use the default format field for the column type

    3400.68

    2158

    3123

    FOO

    2002-07-23

    888

    0.00

    318

    Char(n)

    Text of n characters.

    Date

    Integer (4 bytes)

    Int(n)

    Integer (4 bytes)

    Smallint(n)

    Short integer (2 bytes)

    TinyInt(n)

    Char (1 Byte)

    Bigint(n)

    Large integer (8 bytes)

    [n]{L or B or H}[n]

    n bytes binary number in little endian, big endian or host endian representation.

    C

    Characters string (n bytes)

    I

    integer (4 bytes)

    D

    Double float (8 bytes)

    S

    Short integer (2 bytes)

    T

    Tiny integer (1 byte)

    5500

    ARCHIBALD

    1980-01-25

    3789

    4380.50

    318

    123

    OLIVER

    1953-08-10

    DOS and FIX types
    2
    3
    DOS or FIX types
    CREATE TABLE
    4

    23456

Note 1: This could not be done in only one statement if the table type had required using explicit CONNECT column options. In that case, first create the table, then populate it with an INSERT statement.
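For example, a minimal sketch of that two-step approach, using an XML/HTML target similar to the handout example on this page (the table name, file name and retained columns are only illustrative):

CREATE TABLE handout2 (
  handler VARCHAR(64) NOT NULL,
  version VARCHAR(20) NOT NULL)
ENGINE=CONNECT table_type=XML file_name='handout2.htm'
header=yes
option_list='name=TABLE,coltype=HTML';

INSERT INTO handout2
SELECT plugin_name, plugin_version
FROM information_schema.plugins
WHERE plugin_type = 'STORAGE ENGINE';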

Note 2: The source "plugins" table column "description" is a long text column, a data type not supported by CONNECT tables. It is silently replaced internally by VARCHAR(256).

    This page is licensed: GPLv2

    SELECT
        plugin_name AS handler,
        plugin_version AS version,
        plugin_author AS author,
        plugin_description AS description,
        plugin_maturity AS maturity
    FROM
        information_schema.plugins
    WHERE
        plugin_type = 'STORAGE ENGINE';
    CREATE TABLE handout
    ENGINE=CONNECT
    table_type=XML
    file_name='handout.htm'
    header=yes
    option_list='name=TABLE,coltype=HTML,attribute=border=1;cellpadding=5,headattr=bgcolor=yellow'
    AS
    SELECT
        plugin_name AS handler,
        plugin_version AS version,
        plugin_author AS author,
        plugin_description AS description,
        plugin_maturity AS maturity
    FROM
        information_schema.plugins
    WHERE
        plugin_type = 'STORAGE ENGINE';
    SELECT INTO OUTFILE
    number int(5) not null,
    NNNNCCCCCCCCCCIIIISSFFFFSS
    CREATE TABLE testbal (
    fig INT(4) NOT NULL field_format='C',
    name CHAR(10) NOT NULL,
    birth DATE NOT NULL field_format='L',
    id CHAR(5) NOT NULL field_format='L2',
    salary DOUBLE(9,2) NOT NULL DEFAULT 0.00 field_format='F',
    dept INT(4) NOT NULL field_format='L2')
    ENGINE=CONNECT table_type=BIN block_size=5 file_name='Testbal.dat';
    INSERT INTO testbal VALUES
      (5500,'ARCHIBALD','1980-01-25','3789',4380.50,318),
      (123,'OLIVER','1953-08-10','23456',3400.68,2158),
      (3123,'FOO','2002-07-23','888',default,318);
    SELECT * FROM testbal;

    Note: If not specified, the column definitions are retrieved from the first table of the table list.

The default database of the sub-tables is the current database; if it is not, it can be specified with the DBNAME option. For tables that are not in the default database, the database can also be given in the table list. For instance, to create a table based on the French table employe in the current database and on the English table employee of the db2 database, the syntax of the CREATE statement can be:

The search for columns in sub-tables is done by name and, if they exist under a different name, by their position given by a non-null FLAG option. The column sex exists only in the English table (FLAG is 0). Its values will be NULL for the French table.

    For instance, the query:

    Can reply:

    NAME
    SEX
    TITLE
    SALARY

    BARBOUD

    NULL

    VENDEUR

    9700.00

    MARCHANT

    NULL

    VENDEUR

    8800.00

    MINIARD

    NULL

    ADMINISTRATIF

    The first 9 rows, coming from the French table, have a null for the sex value. They would have 0 if the sex column had been created NOT NULL.

Sub-tables of non-CONNECT engines

Sub-tables are accessed as PROXY tables. For non-CONNECT sub-tables that are accessed via the MySQL API, it is possible, as with PROXY, to change the MYSQL default options. Of course, this applies to all non-CONNECT tables in the list.
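For instance, a sketch of passing MySQL API connection options that then apply to the non-CONNECT sub-tables of the list (the user, password and port values are placeholders):

CREATE TABLE allspan ENGINE=CONNECT table_type=TBL
table_list='t1,db2.t2'
option_list='user=appuser,password=apppass,port=3307';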

    Using the TABID special column

The TABID special column can be used to see which table each row comes from and to restrict access to only some of the sub-tables.

    Let us see the following example where t1 and t2 are MyISAM tables similar to the ones given in the MERGE description:

    The result returned by the SELECT statement is:

    tabname
    a
    message

    xt1

    1

    Testing

    xt1

    2

    table

    xt1

    3

    t1

    xt2

    1

    Testing

    Now if you send the query:

CONNECT will analyze the WHERE clause and only read the xt2 table. This can save time if you want to retrieve only a few sub-tables from a TBL table containing many sub-tables.

    Parallel Execution

    Parallel Execution is currently unavailable until some bugs are fixed.

    When the sub-tables are located on different servers, it is possible to execute the remote queries simultaneously instead of sequentially. To enable this, set the thread option to yes.

    Additional options available for this table type:

Maxerr: The max number of missing tables in the table list before an error is raised. Defaults to 0.

Accept: If true, missing columns are accepted and return null values. Defaults to false.

Thread: If true, enables parallel execution of remote sub-tables.

    These options can be specified in the OPTION_LIST.
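For example, a minimal sketch combining these options (the sub-table names and values are illustrative):

CREATE TABLE alltbl ENGINE=CONNECT table_type=TBL
table_list='xt1,xt2,xt3'
option_list='maxerr=1,accept=1,thread=yes';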

    This page is licensed: GPLv2

    MERGE
    The InnoDB storage engine:
• Is available with all versions of MariaDB Enterprise Server and MariaDB Community Server.

    • Is a general purpose storage engine.

    • Is transactional and well-suited for online transactional processing (OLTP) workloads.

    • Is ACID-compliant.

    • Performs well for mixed read-write workloads.

    • Supports online DDL.

    Feature Summary

    Feature
    Detail
    Resources

    Storage Engine

    InnoDB

    Availability

    All ES and CS versions

    Workload Optimization

    Transactional

    Table Orientation

    Row

    Examples

    Creating an InnoDB Table

    Resources

    Architecture

    • Background Thread Pool

    • Buffer Pool

    • I/O Threads

    • Purge Threads

    Operations

    • Configure Page Compression

    • Configure the Buffer Pool

    • Configure the I/O Threads

    • Configure the Purge Threads

    MariaDB documentation

    • InnoDB

    This page is: Copyright © 2025 MariaDB. All rights reserved.

    rabbit
    bird
    fish

    John

    2

    0

    0

    0

    0

    Bill

    0

    1

    0

    0

    0

    Mary

    1

    1

    0

    0

    We can create an occur table by:

When displaying it with:

    We will get the result:

    name
    race
    number

    John

    dog

    2

    Bill

    cat

    1

    Mary

    dog

    1

    Mary

    cat

    1

First of all, the values of the columns listed in the Colist option have been put in a single column whose name is given by the OccurCol option. When several of these columns have non-null (or pseudo-null) values, several rows are generated, with the values of the other normal columns repeated.

In addition, an optional special column was added whose name is given by the RankCol option. This column contains the name of the source column from which the value of the OccurCol column comes. Here it makes it possible to know the race of the pets whose count is given in number.

This table type permits queries that would be more complicated to write against the original table. For instance, to know who has more than one pet of a kind, you can simply ask:

    You will get the result:

    name
    race
    number

    John

    dog

    2

    Lisbeth

    rabbit

    2

    Kevin

    cat

    2

    Kevin

    bird

    6

Note 1: As with XCOL tables, there is no row multiplication for queries that do not involve the Occur column.

Note 2: Because the OccurCol was declared NOT NULL, no rows were generated for null or pseudo-null values of the column list. If the OccurCol is declared nullable, rows are also generated for columns containing null or pseudo-null values.

Occur tables can also be defined from views or from a source definition (SRCDEF). In addition, CONNECT is able to generate the column definitions if they are not specified:

    This table is displayed as:

    name
    month
    day

    Foo

    january

    8

    Foo

    february

    7

    Foo

    march

    2

    Foo

    april

    1

    This page is licensed: CC BY-SA / Gnu FDL

    XCOL
    PROXY

However, one limitation of the TBL type compared with MERGE is that TBL tables are currently read-only; INSERT is not supported by TBL. Also, keep using MERGE to access a list of identical MyISAM tables because it is faster, as it does not go through the MySQL API.

FEDERATED(X): The main limitation of FEDERATED is that it can access only MySQL/MariaDB tables. The MYSQL table type of CONNECT has the same limitation, but CONNECT also provides the ODBC and JDBC table types that can access tables of any RDBMS providing an ODBC or JDBC driver (including MySQL, even if that is not really useful!)

Another major limitation of FEDERATED is that it accesses only one table. By combining TBL and MYSQL tables, CONNECT makes it possible to access a collection of local or remote tables as one table. Of course, the sub-tables can be on different servers. With one SELECT statement, a company manager is able to interrogate results coming from all of the subsidiaries' computers. This is great for distribution, banking, and many other industries.

    Remotely executing complex queries

Many companies or administrations must deal with distributed information. CONNECT makes it possible to deal with it efficiently without having to copy it to a centralized database. Let us suppose we have, on some remote network machines m1, m2, …, mn, some information contained in two tables t1 and t2.

    Suppose we want to execute on all servers a query such as:

This raises many problems. Returning the column values of the t1 and t2 tables from all servers can generate a lot of network traffic. The GROUP BY on the possibly huge resulting tables can be a long process. In addition, the join on the t1 and t2 tables may be relevant only if the joined tuples belong to the same machine, which obliges adding a condition on an additional TABID or SERVID special column.

    All this can be avoided and optimized by forcing the query to be locally executed on each server and retrieving only the small results of the group by queries. Here is how to do it. For each remote machine, create a table that will retrieve the locally executed query. For instance for m1:

Note the alias for the functional column. An alias would also be required for the c1 column if its name were different on some machines. The t1 and t2 table names can possibly be different on the remote machines as well. The true names must be used in the SRCDEF parameter. This creates a set of tables with two columns named c1 and sc2[1].

    Then create the table that will retrieve the result of all these tables:

    Now you can retrieve the desired result by:

Almost all the work is done on the remote machines, simultaneously thanks to the thread option, making this query very fast even on big tables placed on many remote machines.

Thread is currently experimental. Use it only for testing, and report any malfunction.

    Providing a list of servers

An interesting case is when the query to run on the remote machines is the same for all of them. It is then possible to avoid declaring all the sub-tables. In this case, the table list option is used to specify the list of servers the SRCDEF query must be sent to. This is a list of URLs and/or federated server names.

For instance, supposing that federated servers srv1, srv2, …, srvn were created for all remote servers, it is possible to create a TBL table that gets the result of a query executed on all of them by:

    For instance:

This returns:

    @@version

    10.0.3-MariaDB-debug

    10.0.2-MariaDB

    Here the server list specifies a void server corresponding to the local running MariaDB and a federated server named server_one.

1. To generate the columns from the SRCDEF query, CONNECT must execute it. This makes sure it is valid. However, if the remote server is not connected yet, or the remote table does not exist yet, you can alternatively specify the columns in the CREATE TABLE statement.

    This page is licensed: GPLv2

    FEDERATED
    MERGE
    MyISAM
    TBL
    MYSQL
    The command:

    will return:

    linenum
    name
    city
    birth
    hired
    agehired
    fn

    1

    John

    Boston

    1986-01-25

    2010-06-02

    24

    d:\mariadb\sql\data\boys.txt

    2

    Existing special columns are listed in the following table:

    Special Name
    Type
    Description of the column value

    ROWID

    Integer

    The row ordinal number in the table. This is not quite equivalent to a virtual column with an auto increment of 1 because rows are renumbered when deleting rows.

    ROWNUM

    Integer

    The row ordinal number in the file. This is different from ROWID for multiple tables, TBL/XCOL/OCCUR/PIVOT tables, XML tables with a multiple column, and for DBF tables where ROWNUM includes soft deleted rows.

    FILEID FDISK FPATH FNAME FTYPE

    String

    FILEID returns the full name of the file this row belongs to. Useful in particular for multiple tables represented by several files. The other special columns can be used to retrieve only one part of the full name.

    TABID

    String

    Note: CONNECT does not currently support auto incremented columns. However, a ROWID special column will do the job of a column auto incremented by 1.
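As an illustration, a minimal sketch of such a column (the table and file names are hypothetical):

CREATE TABLE items (
  id INT NOT NULL special=ROWID,
  label CHAR(16) NOT NULL)
ENGINE=CONNECT table_type=DOS file_name='items.txt';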

    This page is licensed: GPLv2

    virtual and persistent columns

    CONNECT DOS and FIX Table Types

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    Overview

    Tables of type DOS and FIX are based on text files (see CONNECT Table Types - Data Files). Within a record, column fields are positioned at a fixed offset from the beginning of the record. Except sometimes for the last field, column fields are also of fixed length. If the last field has varying length, the type of the table is DOS. For instance, having the file dept.dat formatted like:

    You can define a table based on it with:

    Here the flag column option represents the offset of this column inside the records. If the offset of a column is not specified, it defaults to the end of the previous column and defaults to 0 for the first one. The lrecl parameter that represents the maximum size of a record is calculated by default as the end of the rightmost column and can be unspecified except when some trailing information exists after the rightmost column.

    Note: A special case is files having an encoding such as UTF-8 (for instance specifying charset=UTF8) in which some characters may be represented with several bytes. Unlike the type size that MariaDB interprets as a number of characters, the lrecl value is the record size in bytes and the flag value represents the offset of the field in the record in bytes. If the flag and/or the lrecl value are not specified, they are calculated by the number of characters in the fields multiplied by a value that is the maximum size in bytes of a character for the corresponding charset. For UTF-8 this value is 3 which is often far too much as there are very few characters requiring 3 bytes to be represented. When creating a new file, you are on the safe side by only doubling the maximum number of characters of a field to calculate the offset of the next field. Of course, for already existing files, the offset must be specified according to what it is in it.
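For instance, a sketch of a DOS table on a UTF-8 encoded file where the byte offsets are computed by doubling the character widths, as suggested above (the file name and all sizes are only illustrative):

CREATE TABLE dept_utf8 (
  NUMBER CHAR(4) NOT NULL,
  LOCATION CHAR(15) NOT NULL flag=8,
  name CHAR(22) NOT NULL flag=38)
ENGINE=CONNECT table_type=DOS file_name='dept_utf8.dat'
charset=UTF8 lrecl=84;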

    Although the field representation is always text in the table file, you can freely choose the corresponding column type, characters, date, integer or floating point according to its contents.

    Sometimes, as in the number column of the above department table, you have the choice of the type, numeric or characters. This will modify how the column is internally handled — in characters 0021 is different from 21 but not in numeric — as well as how it is displayed.

    If the last field has fixed length, the table should be referred as having the type FIX. For instance, to create a table on the file boys.txt:

    You can for instance use the command:

    Here some flag options were not specified because the fields have no intermediate space between them except for the last column. The offsets are calculated by default adding the field length to the offset of the preceding field. However, for formatted date columns, the offset in the file depends on the format and cannot be calculated by default. For fixed files, the lrecl option is the physical length of the record including the line ending character(s). It is calculated by adding to the end of the last field 2 bytes under Windows (CRLF) or 1 byte under UNIX. If the file is imported from another operating system, the ENDING option will have to be specified with the proper value.

    For this table, the last offset and the record length must be specified anyway because the date columns have field length coming from their format that is not known by CONNECT. Do not forget to add the line ending length to the total length of the fields.

    This table is displayed as:

    name
    city
    birth
    hired

    Whenever possible, the fixed format should be preferred to the varying one because it is much faster to deal with fixed tables than with variable tables. Sure enough, instead of being read or written record by record, FIX tables are processed by blocks of BLOCK_SIZE records, resulting in far less input/output operations to execute. The block size defaults to 100 if not specified in the Create Table statement.
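For instance, a sketch of the boys table declared with an explicit block size and, assuming the file came from a UNIX system, a single-byte line ending (all values are illustrative):

CREATE TABLE boys_fix (
  name CHAR(12) NOT NULL,
  city CHAR(12) NOT NULL,
  birth DATE NOT NULL date_format='DD/MM/YYYY',
  hired DATE NOT NULL date_format='DD/MM/YYYY' flag=36)
ENGINE=CONNECT table_type=FIX file_name='boys.txt'
lrecl=47 ending=1 block_size=500;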

    Note 1: It is not mandatory to declare in the table all the fields existing in the source file. However, if some fields are ignored, the flag option of the following field and/or the lrecl option will have to be specified.

Note 2: Some files have an EOF marker (CTRL+Z, 0x1A) that can prevent the table from being recognized as fixed, because the file length is then not a multiple of the fixed record size. To indicate this, use the create option EOF in the option list. For instance, if after creating the FIX table xtab on the file foo.dat, which you know has a fixed record size, you get, when you try to use it, a message such as:

After checking that the default or specified LRECL is correct, you can tell CONNECT to ignore that extra EOF character by:

    Of course, you can specify this option directly in the Create statement. All this applies to some other table types, in particular to BIN tables.

    Note 3: The width of the fields is the length specified in the column declaration. For instance for a column declared as:

    The field width in the file is 3 characters. This is the value used to calculate the offset of the next field if it is not specified. If this length is not specified, it defaults to the MySQL default type length.

    Specifying the Field Format

    Some files have specific format for their numeric fields. For instance, the decimal point is absent and/or the field should be filled with leading zeros. To deal with such files, as well in reading as in writing, the format can be specified in the CREATE TABLE column definition. The syntax of the field format specification is:

    The optional parts of the format are:

    Example

    Let us see how it works in the following example. We define a table based on the file xfmt.txt having eight fields of 12 characters:

    The first row is displayed as:

    COL1
    COL2
    COL3
    COL4
    COL5
    COL6
    COL7
    COL8

    The number of decimals displayed for all float columns is the column precision, the second argument of the column type option. Of course, integer columns have no decimals, although their formats specify some.

    More interesting is the file layout. To see it let us define another table based on the same file but whose columns are all characters:

    The (transposed) display of the select command shows the file text layout for each field. Below a third column was added in this document to comment this result.

    Column
    Row 1
    Comment (all numeric fields are written right justified)

    Note: For columns internally using double precision floating-point numbers, MariaDB limits the decimal precision of any calculation to the column precision. The declared column precision should be at least the number of decimals of the format to avoid a loss of decimals as it happened for col3 of the above example.

    This page is licensed: CC BY-SA / Gnu FDL

    InnoDB Redo Log

    The redo log is a disk-based transaction log used during crash recovery to replay incomplete transactions and ensure data durability.

    Directly editing or moving the redo logs can cause corruption, and should never normally be attempted.

    Overview

    The InnoDB storage engine in MariaDB Enterprise Server employs a Redo Log to ensure data is written to disk reliably, even in the event of a crash. This is achieved by the Redo Log acting as a transaction log.

    Redo Log records are uniquely identified by a Log Sequence Number (LSN). The Redo Log consists of circular log files of a fixed size, where older records are routinely overwritten by newer ones.

    InnoDB periodically performs checkpoints. During a checkpoint operation, InnoDB writes (flushes) the Redo Log records to the InnoDB tablespace files.

    In the event of a server crash, InnoDB utilizes the Redo Log for crash recovery during the subsequent server startup. It locates the last checkpoint within the Redo Log and then replays the Redo Log records generated since that checkpoint, effectively flushing these pending changes to the InnoDB tablespace files.

    The redo log files are typically named ib_logfileN, where N is an integer. However, starting with , there is a single redo log file, consistently named ib_logfile0. The location of these redo log files is determined by the innodb_log_group_home_dir system variable, if configured. If this variable is not set, the redo log files are created in the directory specified by the datadir system variable. The redo log plays a crucial role during crash recovery and in the background process of flushing transactions to the tablespaces.
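For example, a sketch of placing the redo log in a dedicated directory via the option file (the path is only an example):

[mariadb]
...
innodb_log_group_home_dir=/var/lib/mysql-redo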

    Feature Summary

    Feature
    Detail
    Resources

    Basic Configuration

    Flushing Effects on Performance and Consistency

    The system variable determines how often the transactions are flushed to the redo log, and it is important to achieve a good balance between speed and reliability.
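As a sketch, the default value of 1 gives full durability by flushing and syncing the log at every commit; it can be set following the option-file and SET GLOBAL patterns used elsewhere on this page:

[mariadb]
...
innodb_flush_log_at_trx_commit=1

SET GLOBAL innodb_flush_log_at_trx_commit=1;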

    Binary Log Group Commit and Redo Log Flushing

    When both (the default) is set and the is enabled, there is one less sync to disk inside InnoDB during commit (2 syncs shared between a group of transactions instead of 3). See for more information.

    Redo Log Group Capacity

    The redo log group capacity is the total combined size of all InnoDB redo logs. The relevant factors are:

    • From , there is 1 redo log.

    • The size of each redo log file is configured by the system variable. This can safely be set to a much higher value from . Before , resizing required the server to be restarted. From the variable can be set dynamically.

    The redo log group capacity is determined by the following calculation:

innodb_log_group_capacity = innodb_log_file_size * innodb_log_files_in_group

For example, if innodb_log_file_size is set to 2G and innodb_log_files_in_group is set to 2, then we would have the following:

• innodb_log_group_capacity = innodb_log_file_size * innodb_log_files_in_group

• = 2G * 2

• = 4G

    Changing the Redo Log Group Capacity

    MariaDB starting with

    The number of redo log files is fixed.

    The size of redo log files can be changed with the following process:

    • Stop the server.

    • To change the log file size, configure .

    • Start the server.

    Log Sequence Number (LSN)

    Records within the InnoDB redo log are identified via a log sequence number (LSN).

    Checkpoints

When InnoDB performs a checkpoint, it writes the LSN of the oldest dirty page in the buffer pool to the InnoDB redo log. If a page is the oldest dirty page in the buffer pool, then that means that all pages with lower LSNs have been flushed to the physical InnoDB tablespace files. If the server were to crash, then InnoDB would perform crash recovery by only applying log records with LSNs that are greater than or equal to the LSN of the oldest dirty page written in the last checkpoint.

    Checkpoints are one of the tasks performed by the InnoDB master background thread. This thread schedules checkpoints 7 seconds apart when the server is very active, but checkpoints can happen more frequently when the server is less active.

Dirty pages are not actually flushed from the buffer pool to the physical InnoDB tablespace files during a checkpoint. That process happens asynchronously, on a continuous basis, by InnoDB's write I/O background threads, whose number is configured by the innodb_write_io_threads system variable. If you want to make this process more aggressive, you can decrease the value of the innodb_max_dirty_pages_pct system variable. You may also need to better tune InnoDB's I/O capacity on your system by setting the innodb_io_capacity system variable.
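As a sketch of that kind of tuning (the values are only illustrative and should be validated with your own benchmarks):

SET GLOBAL innodb_max_dirty_pages_pct=75;
SET GLOBAL innodb_io_capacity=1000;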

    Determining the Checkpoint Age

    The checkpoint age is the amount of data written to the InnoDB redo log since the last checkpoint.

    Determining the Checkpoint Age in InnoDB

    MariaDB starting with

reintroduced the Innodb_checkpoint_age status variable for determining the checkpoint age.
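Where that status variable is available, it can be read directly, for example:

SHOW GLOBAL STATUS LIKE 'Innodb_checkpoint_age';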

    The checkpoint age can also be determined by the process shown below.

    To determine the InnoDB checkpoint age, do the following:

• Query SHOW ENGINE INNODB STATUS.

    • Find the LOG section:

    • Perform the following calculation:

    innodb_checkpoint_age = Log sequence number - Last checkpoint at

    In the example above, that would be:

    • innodb_checkpoint_age = Log sequence number - Last checkpoint at

    • = 252794398789379 - 252792767756840

    • = 1631032539 bytes

    Determining the Redo Log Occupancy

    The redo log occupancy is the percentage of the InnoDB redo log capacity that is taken up by dirty pages that have not yet been flushed to the physical InnoDB tablespace files in a checkpoint. Therefore, it's determined by the following calculation:

innodb_log_occupancy = innodb_checkpoint_age / innodb_log_group_capacity

For example, if innodb_checkpoint_age is 1.5G and innodb_log_group_capacity is 4G, then we would have the following:

• innodb_log_occupancy = innodb_checkpoint_age / innodb_log_group_capacity

• = 1.5G / 4G

• = 0.375

    If the calculated value for redo log occupancy is too close to 1.0, then the InnoDB redo log capacity may be too small for the current workload.

    Updates

    A number of redo log improvements were made in :

    • Autosize innodb_buffer_pool_chunk_size ().

    • Improve the redo log for concurrency ().

    • Remove FIL_PAGE_FILE_FLUSH_LSN ().

Before , mariadb-backup --prepare created a zero-length ib_logfile0 as a dummy placeholder. From (), the size of that dummy file was increased to 12304 (0x3010) bytes, and all updates of FIL_PAGE_FILE_FLUSH_LSN in the first page of the system tablespace are removed.

From , if the server is started up with a zero-sized ib_logfile0, it is assumed that an upgrade is being performed after a backup had been prepared. The start LSN will then be read from FIL_PAGE_FILE_FLUSH_LSN, and a new log file is created starting from exactly that LSN.

Manually creating a zero-sized ib_logfile0 without manually updating the FIL_PAGE_FILE_FLUSH_LSN in the system tablespace to a recent enough LSN may result in error messages such as "page LSN is in the future". If a log was discarded while some changes had already been written to data pages, all sorts of corruption may occur.

    If the database was initialized with a server that never updates the FIL_PAGE_FILE_FLUSH_LSN field, then any server startup attempts with a zero-size ib_logfile0 are refused because of an invalid LSN. If that field was ever updated with a valid LSN by an older server, this safety mechanism cannot work, and the server may "rewind" to an earlier LSN.

    This page is licensed: CC BY-SA / Gnu FDL

    CONNECT XCOL Table Type

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

XCOL tables are based on another table or view, like PROXY tables. This type can be used when the object table has a column that contains a list of values.

    Suppose we have a 'children' table that can be displayed as:

    name
    childlist

    InnoDB Versions

    This page is outdated. It's left in place because release notes for old MariaDB versions refer to it (MariaDB < 10.3).

    This page is outdated. It's left in place because release notes for old MariaDB versions refer to it (MariaDB < 10.3).

    The InnoDB implementation has diverged substantially from the InnoDB in MySQL. Therefore, in these versions, the InnoDB version is no longer associated with a MySQL release version.

    The default InnoDB implementation is based on InnoDB from MySQL 5.7. See for more information.

    CREATE TABLE allemp (
      SERIALNO char(5) NOT NULL flag=1,
      NAME varchar(12) NOT NULL flag=2,
      SEX smallint(1),
      TITLE varchar(15) NOT NULL flag=3,
      MANAGER char(5) DEFAULT NULL flag=4,
      DEPARTMENT char(4) NOT NULL flag=5,
      SECRETARY char(5) DEFAULT NULL flag=6,
      SALARY double(8,2) NOT NULL flag=7)
    ENGINE=CONNECT table_type=TBL
    table_list='employe,db2.employee' option_list='Accept=1';
    SELECT name, sex, title, salary FROM allemp WHERE department = 318;
    CREATE TABLE xt1 (
      a INT(11) NOT NULL,
      message CHAR(20))
    ENGINE=CONNECT table_type=MYSQL tabname='t1'
    option_list='database=test,user=root';
    
    CREATE TABLE xt2 (
      a INT(11) NOT NULL,
      message CHAR(20))
    ENGINE=CONNECT table_type=MYSQL tabname='t2'
    option_list='database=test,user=root';
    
CREATE TABLE total (
      tabname CHAR(8) NOT NULL special='TABID',
      a INT(11) NOT NULL,
      message CHAR(20))
    ENGINE=CONNECT table_type=TBL table_list='xt1,xt2';
    
    SELECT * FROM total;
    SELECT * FROM total WHERE tabname = 'xt2';
    CREATE DATABASE hq_sales;
    CREATE TABLE hq_sales.invoices (
       invoice_id BIGINT UNSIGNED AUTO_INCREMENT NOT NULL,
       branch_id INT NOT NULL,
       customer_id INT,
       invoice_date DATETIME(6),
       invoice_total DECIMAL(13, 2),
       payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
       PRIMARY KEY(invoice_id)
    ) ENGINE = InnoDB;
    SELECT TABLE_SCHEMA, TABLE_NAME, ENGINE
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA='hq_sales'
    AND TABLE_NAME='invoices';
    +--------------+------------+--------+
    | TABLE_SCHEMA | TABLE_NAME | ENGINE |
    +--------------+------------+--------+
    | hq_sales     | invoices   | InnoDB |
    +--------------+------------+--------+
    CREATE TABLE xpet (
      name VARCHAR(12) NOT NULL,
      race CHAR(6) NOT NULL,
      NUMBER INT NOT NULL)
    ENGINE=CONNECT table_type=occur tabname=pets
    option_list='OccurCol=number,RankCol=race'
    Colist='dog,cat,rabbit,bird,fish';
    SELECT * FROM xpet;
    SELECT * FROM xpet WHERE NUMBER > 1;
    CREATE TABLE ocsrc ENGINE=CONNECT table_type=occur
    colist='january,february,march,april,may,june,july,august,september,
    october,november,december' option_list='rankcol=month,occurcol=day'
    srcdef='select ''Foo'' name, 8 january, 7 february, 2 march, 1 april,
      8 may, 14 june, 25 july, 10 august, 13 september, 22 october, 28
      november, 14 december';
    SELECT c1, SUM(c2) FROM t1 a, t2 b WHERE a.id = b.id GROUP BY c1;
    CREATE TABLE rt1 ENGINE=CONNECT option_list='host=m1'
    srcdef='select c1, sum(c2) as sc2 from t1 a, t2 b where a.id = b.id group by c1';
    CREATE TABLE rtall ENGINE=CONNECT table_type=tbl
    table_list='rt1,rt2,…,rtn' option_list='thread=yes';
    SELECT c1, SUM(sc2) FROM rtall;
    CREATE TABLE qall [column definition]
    ENGINE=CONNECT table_type=TBL srcdef='a query'
    table_list='srv1,srv2,…,srvn' [option_list='thread=yes'];
    CREATE TABLE verall ENGINE=CONNECT table_type=TBL srcdef='select @@version' table_list=',server_one';
    SELECT * FROM verall;
    CREATE TABLE boys (
      linenum INT(6) NOT NULL DEFAULT 0 special=ROWID,
      name CHAR(12) NOT NULL,
      city CHAR(12) NOT NULL,
      birth DATE NOT NULL date_format='DD/MM/YYYY',
      hired DATE NOT NULL date_format='DD/MM/YYYY' flag=36,
      agehired INT(3) AS (floor(datediff(hired,birth)/365.25))
      virtual,
      fn CHAR(100) NOT NULL DEFAULT '' special=FILEID)
    ENGINE=CONNECT table_type=FIX file_name='boys.txt' mapped=YES lrecl=47;
    SELECT * FROM boys WHERE city = 'boston';

    7500.00

    POUPIN

    NULL

    INGENIEUR

    7450.00

    ANTERPE

    NULL

    INGENIEUR

    6850.00

    LOULOUTE

    NULL

    SECRETAIRE

    4900.00

    TARTINE

    NULL

    OPERATRICE

    2800.00

    WERTHER

    NULL

    DIRECTEUR

    14500.00

    VOITURIN

    NULL

    VENDEUR

    10130.00

    BANCROFT

    2

    SALESMAN

    9600.00

    MERCHANT

    1

    SALESMAN

    8700.00

    SHRINKY

    2

    ADMINISTRATOR

    7500.00

    WALTER

    1

    ENGINEER

    7400.00

    TONGHO

    1

    ENGINEER

    6800.00

    HONEY

    2

    SECRETARY

    4900.00

    PLUMHEAD

    2

    TYPIST

    2800.00

    WERTHER

    1

    DIRECTOR

    14500.00

    WHEELFOR

    1

    SALESMAN

    10030.00

    xt2

    2

    table

    xt2

    3

    t2

    0

    Lisbeth

    0

    0

    2

    0

    0

    Kevin

    0

    2

    0

    6

    0

    Donald

    1

    0

    0

    0

    3

    Lisbeth

    rabbit

    2

    Kevin

    cat

    2

    Kevin

    bird

    6

    Donald

    dog

    1

    Donald

    fish

    3

    Donald

    fish

    3

    Foo

    may

    8

    Foo

    june

    14

    Foo

    july

    25

    Foo

    august

    10

    Foo

    september

    13

    Foo

    october

    22

    Foo

    november

    28

    Foo

    december

    14

    Henry

    Boston

    1987-06-07

    2008-04-01

    20

    d:\mariadb\sql\data\boys.txt

    6

    Bill

    Boston

    1986-09-11

    2008-02-10

    21

    d:\mariadb\sql\data\boys.txt

    The name of the table this row belongs to. Useful for TBL tables.

    PARTID

    String

    The name of the partition this row belongs to. Specific to partitioned tables.

    SERVID

    String

    The name of the federated server or server host used by a MYSQL table. “ODBC” for an ODBC table, "JDBC" for a JDBC table and “Current” for all other tables.

    Sam

    Chicago

    1979-11-22

    2007-10-10

    James

    Dallas

    1992-05-13

    2009-12-14

    Bill

    Boston

    1986-09-11

    2008-02-10

    COL5

    -0023456.800

    Z3 → (Minus sign) leading zeros, 3 decimals.

    COL6

    000000314159

    ZN5 → Leading zeros, no decimal point, 5 decimals.

    COL7

    4567000

    N3 → No decimal point. The last 3 digits are decimals.

    COL8

    4567000

    Same. Any decimals would be ignored.

    John

    Boston

    1986-01-25

    2010-06-02

    Henry

    Boston

    1987-06-07

    2008-04-01

    George

    San Jose

    1981-08-10

Z: The field has leading zeros.

N: No decimal point exists in the file.

d: The number of decimals; defaults to the column precision.

    4567.056

    4567.056

    4567.06

    4567.056

    -23456.800

    3.14159

    4567

    4567

    COL1

    4567.056

    No format, the value was entered as is.

    COL2

    4567.0560

    The format ‘4’ forces to write 4 decimals.

    COL3

    4567060

    N3 → No decimal point. The last 3 digits are decimals. However, the second decimal was rounded because of the column precision.

    COL4

    00004567.056

    2010-06-02

    Z → Leading zeros, 3 decimals (the column precision)

    0318 KINGSTON       70012 SALES       Bank/Insurance
    0021 ARMONK         87777 CHQ         Corporate headquarter
    0319 HARRISON       40567 SALES       Federal Administration
    2452 POUGHKEEPSIE   31416 DEVELOPMENT Research & development
    CREATE TABLE department (
      NUMBER CHAR(4) NOT NULL,
      LOCATION CHAR(15) NOT NULL flag=5,
      director CHAR(5) NOT NULL flag=20,
      FUNCTION CHAR(12) NOT NULL flag=26,
      name CHAR(22) NOT NULL flag=38)
    ENGINE=CONNECT table_type=DOS file_name='dept.dat';
    John      Boston      25/01/1986  02/06/2010
    Henry     Boston      07/06/1987  01/04/2008
    George    San Jose    10/08/1981  02/06/2010
    Sam       Chicago     22/11/1979  10/10/2007
    James     Dallas      13/05/1992  14/12/2009
    Bill      Boston      11/09/1986  10/02/2008
    CREATE TABLE boys (
      name CHAR(12) NOT NULL,
      city CHAR(12) NOT NULL,
      birth DATE NOT NULL date_format='DD/MM/YYYY',
      hired DATE NOT NULL date_format='DD/MM/YYYY' flag=36)
    ENGINE=CONNECT table_type=FIX file_name='boys.txt' lrecl=48;
    File foo.dat is not fixed length, len=302587 lrecl=141
    ALTER TABLE xtab option_list='eof=1';
    number int(3) not null,
    Field_format='[Z][N][d]'
    CREATE TABLE xfmt (
      col1 DOUBLE(12,3) NOT NULL,
      col2 DOUBLE(12,3) NOT NULL field_format='4',
      col3 DOUBLE(12,2) NOT NULL field_format='N3',
      col4 DOUBLE(12,3) NOT NULL field_format='Z',
      col5 DOUBLE(12,3) NOT NULL field_format='Z3',
      col6 DOUBLE(12,5) NOT NULL field_format='ZN5',
      col7 INT(12) NOT NULL field_format='N3',
      col8 SMALLINT(12) NOT NULL field_format='N3')
    ENGINE=CONNECT table_type=FIX file_name='xfmt.txt';
    
    INSERT INTO xfmt VALUES(4567.056,4567.056,4567.056,4567.056,-23456.8,
        3.14159,4567,4567);
    SELECT * FROM xfmt;
    CREATE TABLE cfmt (
      col1 CHAR(12) NOT NULL,
      col2 CHAR(12) NOT NULL,
      col3 CHAR(12) NOT NULL,
      col4 CHAR(12) NOT NULL,
      col5 CHAR(12) NOT NULL,
      col6 CHAR(12) NOT NULL,
      col7 CHAR(12) NOT NULL,
      col8 CHAR(12) NOT NULL)
    ENGINE=CONNECT table_type=FIX file_name='xfmt.txt';
    SELECT * FROM cfmt;

Location: Set by innodb_log_group_home_dir (defaults to the datadir)

Quantity: 1 (ES 10.5+, CS 10.5+)

Size: Set by innodb_log_file_size (default varies)

= 1631032539 bytes / (1024 * 1024 * 1024) bytes per GB
  • = 1.5 GB of redo log written since last checkpoint

  • Transaction Log

    InnoDB Redo Log

    Storage Engine

    InnoDB

    Purpose

    Crash Safety

    Availability

    All ES and CS versions

    MariaDB Enterprise Server

    innodb_flush_log_at_trx_commit
    innodb_flush_log_at_trx_commit=1
    binary log
    Binary Log Group Commit and InnoDB Flushing Performance
    innodb_log_file_size
    innodb_log_file_size
    innodb_log_files_in_group
    innodb_log_file_size
    innodb_log_files_in_group
    innodb_log_file_size
    innodb_log_files_in_group
    innodb_log_file_size
    InnoDB buffer pool
    InnoDB buffer pool
    innodb_write_io_threads
    innodb_max_dirty_pages_pct
    innodb_io_capacity
    Innodb_checkpoint_age
    SHOW ENGINE INNODB STATUS
    innodb_checkpoint_age
    innodb_log_group_capacity
    innodb_checkpoint_age
    innodb_log_group_capacity
    innodb_checkpoint_age
    innodb_log_group_capacity
    MDEV-25342
    MDEV-14425
    MDEV-27199
    mariadb-backup --prepare
    MDEV-14425

    Sophie

    Vivian, Antony

    Lisbeth

    Lucy,Charles,Diana

    Corinne

    Claude

    Marc

    Janet

    Arthur, Sandra, Peter, John

We can have a different view of these data, in which each child is associated with his or her mother, by creating an XCOL table:

The COLNAME option specifies the name of the column receiving the list items. Selecting from this table returns the requested view:

    mother
    child

    Sophia

    Vivian

    Sophia

    Antony

    Lisbeth

    Lucy

    Lisbeth

    Charles

    Lisbeth

    Diana

    Corinne

    NULL

    Several things should be noted here:

• When the original children field is void, what happens depends on the NULL specification of the "multiple" column. If it is nullable, as here, a void string generates a NULL value. However, if the column is not nullable, no row is generated at all.

    • Blanks after the separator are ignored.

    • No copy of the original data was done. Both tables use the same source data.

    • Specifying the column definitions in the CREATE TABLE statement is optional.

    The "multiple" column child can be used as any other column. For instance:

    This will return:

    Mother
    Child

    Sophia

    Antony

    Janet

    Arthur

    If a query does not involve the "multiple" column, no row multiplication will be done. For instance:

    This will just return all the mothers:

    mother

    Sophia

    Lisbeth

    Corinne

    Claude

    Janet

    The same occurs with other types of select statements, for instance:

    Grouping also gives different result:

    Replies:

    mother
    count(*)

    Claude

    1

    Corinne

    1

    Janet

    1

    Lisbeth

    1

    Sophia

    1

    While the query:

    Gives the more interesting result:

    mother
    count(child)

    Claude

    1

    Corinne

    0

    Janet

    4

    Lisbeth

    3

    Sophia

    2

    Some more options are available for this table type:

Sep_char: The separator character used in the "multiple" column; defaults to the comma.

Mult: Indicates the max number of multiple items. It is used to internally calculate the max size of the table and defaults to 10. (To be specified in OPTION_LIST.)
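For example, a sketch of an XCOL table whose list column uses a semicolon separator and may hold up to 20 items (the separator and limit are illustrative):

CREATE TABLE xchild3 (
  mother CHAR(12) NOT NULL,
  child CHAR(12) DEFAULT NULL flag=2
) ENGINE=CONNECT table_type=XCOL tabname='chlist'
sep_char=';'
option_list='colname=child,mult=20';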

    Using Special Columns with XCOL

Special columns can be used in XCOL tables. The most useful one is ROWNUM, which gives the rank of the value in the list of values. For instance:

This table is displayed as:

    rank
    mother
    child

    1

    Sophia

    Vivian

    2

    Sophia

    Antony

    1

    Lisbeth

    Lucy

    2

    Lisbeth

    Charles

    To list only the first child of each mother you can do:

    returning:

    mother
    child

    Sophia

    Vivian

    Lisbeth

    Lucy

    Claude

    Marc

    Janet

    Arthur

    However, note the following pitfall: trying to get the names of all mothers having more than 2 children cannot be done by:

This is because, with no row multiplication being done, the rank value is always 1. The correct way to obtain this result is longer but does not use the ROWNUM column:

    XCOL tables based on specified views

Instead of specifying a source table name via the TABNAME option, it is possible to retrieve data from a "view" whose definition is given in the SRCDEF option. For instance:

    Then, for instance:

    This will display something like:

    value

    index_merge=on

    index_merge_union=on

    index_merge_sort_union=on

    index_merge_intersection=on

    index_merge_sort_intersection=off

    engine_condition_pushdown=off

    index_condition_pushdown=on

    derived_merge=on

    derived_with_keys=on

    firstmatch=on

    Note: All XCOL tables are read only.

    This page is licensed: CC BY-SA / Gnu FDL

    PROXY
    Note

    XtraDB is a performance enhanced fork of InnoDB. For compatibility reasons, the system variables still retain their original innodb prefixes. If the documentation says that something applies to InnoDB, then it usually also applies to the XtraDB fork, unless explicitly stated otherwise. In these versions, it is still possible to use InnoDB instead of XtraDB. See Using InnoDB instead of XtraDB for more information.

    Divergences

    Some examples of divergences between MariaDB's InnoDB and MySQL's InnoDB are:

    • (which is based on MySQL 5.6) included encryption and variable-size page compression before MySQL 5.7 introduced them.

    • (based on MySQL 5.7) introduced persistent AUTO_INCREMENT (MDEV-6076) in a GA release before MySQL 8.0.

    • (based on MySQL 5.7) introduced instant ADD COLUMN (MDEV-11369) before MySQL.

    InnoDB Versions Included in MariaDB Releases

    InnoDB Version
    Introduced

    No longer reported

    InnoDB 5.7.20

    InnoDB 5.7.19

    InnoDB 5.7.14

    InnoDB Version
    Introduced

    InnoDB 5.7.29

    InnoDB 5.7.23

    InnoDB 5.7.22

    InnoDB 5.7.21

    InnoDB 5.7.20

    InnoDB 5.7.19

    InnoDB Version
    Introduced

    InnoDB 5.6.49

    InnoDB 5.6.47

    InnoDB 5.6.44

    InnoDB 5.6.42

    InnoDB 5.6.39

    InnoDB 5.6.37

    InnoDB Version
    Introduced

    InnoDB 5.6.43

    InnoDB 5.6.42

    InnoDB 5.6.40

    InnoDB 5.6.39

    InnoDB 5.6.38

    InnoDB 5.6.37

    See Also

    • Why MariaDB uses InnoDB instead of XtraDB from MariaDB 10.2

    • XtraDB Versions

    This page is licensed: CC BY-SA / Gnu FDL

    Why MariaDB uses InnoDB instead of XtraDB from MariaDB 10.2

    Default Row Format

    Dynamic

    InnoDB Row Formats InnoDB Dynamic Row Format

    ACID-compliant

    Yes

    XA Transactions

    Yes

    Primary Keys

    Yes

    InnoDB Primary Keys

    Auto-Increment

    Yes

    InnoDB AUTO_INCREMENT Columns

    Sequences

    Yes

    InnoDB Sequences

    Foreign Keys

    Yes

    InnoDB Foreign Keys

    Indexes

    Yes

    InnoDB Indexes

    Secondary Indexes

    Yes

    InnoDB Secondary Indexes

    Unique Indexes

    Yes

    InnoDB Unique Indexes

    Full-text Search

    Yes

    InnoDB Full-text Indexes

    Spatial Indexes

    Yes

    InnoDB Spatial Indexes

    Compression

    Yes

    Configure InnoDB Page Compression

    Data-at-Rest Encryption

    Yes

    High Availability (HA)

    Yes

    • MariaDB Replication •

    Main Memory Caching

    Yes

    InnoDB Buffer Pool

    Transaction Logging

    Yes

    • InnoDB Redo Log (Crash Safety) • InnoDB Undo Log (MVCC)

    Garbage Collection

    Yes

    InnoDB Purge Threads

    Online Schema changes

    Yes

    InnoDB Schema Changes

    Non-locking Reads

    Yes

    Row Locking

    Yes

    Redo Log
    Row Formats
    Undo Log
    Configure the Redo Log
    Configure the Undo Log
    Schema Changes
    MariaDB Enterprise Server

    CONNECT System Variables

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

This page documents system variables related to the CONNECT storage engine. See Server System Variables for instructions on setting them.

    See also the .

    CONNECT DBF Table Type

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    Overview

    A table of type DBF is physically a dBASE III or IV formatted file (used by many products like dBASE, Xbase, FoxPro etc.). This format is similar to the

    InnoDB Page Flushing

    Learn about the background processes that flush dirty pages from the buffer pool to disk, including adaptive flushing algorithms to optimize I/O.

    Page Flushing with InnoDB Page Cleaner Threads

InnoDB page cleaner threads flush dirty pages from the buffer pool. These dirty pages are flushed using a least-recently used (LRU) algorithm.

    [mariadb]
    ...
    innodb_log_file_size=2G
    SET GLOBAL innodb_log_file_size=(2 * 1024 * 1024 * 1024);
    
    SHOW GLOBAL VARIABLES
       LIKE 'innodb_log_file_size';
    +----------------------+------------+
    | Variable_name        | Value      |
    +----------------------+------------+
    | innodb_log_file_size | 2147483648 |
    +----------------------+------------+
    ---
    LOG
    ---
    Log sequence number 252794398789379
    Log flushed up to 252794398789379
    Pages flushed up to 252792767756840
    Last checkpoint at 252792767756840
    0 pending log flushes, 0 pending chkp writes
    23930412 log i/o's done, 2.03 log i/o's/second
    CREATE TABLE xchild (
      mother CHAR(12) NOT NULL,
      child CHAR(12) DEFAULT NULL flag=2
    ) ENGINE=CONNECT table_type=XCOL tabname='chlist'
    option_list='colname=child';
    SELECT * FROM xchild;
    SELECT * FROM xchild WHERE substr(child,1,1) = 'A';
    SELECT mother FROM xchild;
    SELECT COUNT(*) FROM xchild;      -- returns 5
    SELECT COUNT(child) FROM xchild;  -- returns 10
    SELECT COUNT(mother) FROM xchild; -- returns 5
    SELECT mother, COUNT(*) FROM xchild GROUP BY mother;
    SELECT mother, COUNT(child) FROM xchild GROUP BY mother;
    CREATE TABLE xchild2 (
    rank INT NOT NULL SPECIAL=ROWID,
    mother CHAR(12) NOT NULL,
    child CHAR(12) NOT NULL flag=2
    ) ENGINE=CONNECT table_type=XCOL tabname='chlist' option_list='colname=child';
    SELECT mother, child FROM xchild2 WHERE rank = 1 ;
    SELECT mother FROM xchild2 WHERE rank > 2;
    SELECT mother FROM xchild2 GROUP BY mother HAVING COUNT(child) > 2;
    CREATE TABLE xsvars ENGINE=CONNECT table_type=XCOL
    srcdef='show variables like "optimizer_switch"'
    option_list='Colname=Value';
    SELECT value FROM xsvars LIMIT 10;

    Claude

    Marc

    Janet

    Arthur

    Janet

    Sandra

    Janet

    Peter

    Janet

    John

    3

    Lisbeth

    Diana

    1

    Claude

    Marc

    1

    Janet

    Arthur

    2

    Janet

    Sandra

    3

    Janet

    Peter

    4

    Janet

    John

    innodb_max_dirty_pages_pct

    The innodb_max_dirty_pages_pct variable specifies the maximum percentage of unwritten (dirty) pages in the buffer pool. If this percentage is exceeded, flushing will take place.
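For example, the threshold can be lowered at runtime (the value is illustrative):

SET GLOBAL innodb_max_dirty_pages_pct=80;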

    innodb_max_dirty_pages_pct_lwm

The innodb_max_dirty_pages_pct_lwm variable determines the low-water mark percentage of dirty pages that enables preflushing to lower the dirty page ratio. The value 0 (the default) means that there is no separate background flushing so long as:

    • the share of dirty pages does not exceed innodb_max_dirty_pages_pct

    • the last checkpoint age (LSN difference since the latest checkpoint) does not exceed innodb_log_file_size (minus some safety margin)

    • the buffer pool is not running out of space, which could trigger eviction flushing

To make flushing more eager, set innodb_max_dirty_pages_pct_lwm to a small nonzero value, for example: SET GLOBAL innodb_max_dirty_pages_pct_lwm=0.001;

    Page Flushing with Multiple InnoDB Page Cleaner Threads

    The innodb_page_cleaners system variable makes it possible to use multiple InnoDB page cleaner threads. It is deprecated and ignored now as the original reasons for splitting the buffer pool have mostly gone away.

    The number of InnoDB page cleaner threads can be configured by setting the innodb_page_cleaners system variable. The system variable can be set in a server option group in an option file prior to starting up the server:

    The system variable can be changed dynamically with SET GLOBAL:

    This system variable's default value is either 4 or the configured value of the innodb_buffer_pool_instances system variable, whichever is lower.

    Page Flushing with a Single InnoDB Page Cleaner Thread

    Since the original reasons for splitting the buffer pool have mostly gone away, only a single InnoDB page cleaner thread is supported.

    Page Flushing with Multi-threaded Flush Threads

InnoDB's multi-thread flush feature can be enabled by setting the innodb_use_mtflush system variable. The number of threads can be configured by setting the innodb_mtflush_threads system variable. This system variable can be set in a server option group in an option file prior to starting up the server:

    The innodb_mtflush_threads system variable's default value is 8. The maximum value is 64. In multi-core systems, it is recommended to set its value close to the configured value of the innodb_buffer_pool_instances system variable. However, it is also recommended to use your own benchmarks to find a suitable value for your particular application.

    Note

    InnoDB's multi-thread flush feature is deprecated. Use multiple InnoDB page cleaner threads instead.

    Configuring the InnoDB I/O Capacity

    Increasing the amount of I/O capacity available to InnoDB can also help increase the performance of page flushing.

    The amount of I/O capacity available to InnoDB can be configured by setting the innodb_io_capacity system variable. This system variable can be changed dynamically with SET GLOBAL:

    This system variable can also be set in a server option group in an option file prior to starting up the server:

    The maximum amount of I/O capacity available to InnoDB in an emergency defaults to either 2000 or twice innodb_io_capacity, whichever is higher, or can be directly configured by setting the innodb_io_capacity_max system variable. This system variable can be changed dynamically with SET GLOBAL:

    This system variable can also be set in a server option group in an option file prior to starting up the server:

    See Also

    • Significant performance boost with new MariaDB page compression on FusionIO

    This page is licensed: CC BY-SA / Gnu FDL

    InnoDB buffer pool
    [mariadb]
    ...
    innodb_page_cleaners=8
    SET GLOBAL innodb_page_cleaners=8;
    [mariadb]
    ...
    innodb_use_mtflush = ON
    innodb_mtflush_threads = 8
    SET GLOBAL innodb_io_capacity=20000;
    [mariadb]
    ...
    innodb_io_capacity=20000
    SET GLOBAL innodb_io_capacity_max=20000;
    [mariadb]
    ...
    innodb_io_capacity_max=20000
    connect_class_path
    • Description: Java class path

    • Command line: --connect-class-path=value

    • Scope: Global

    • Dynamic:

    • Data Type: string

    • Default Value:

    connect_cond_push

    • Description: Enable condition pushdown

    • Command line: --connect-cond-push={0|1}

    • Scope: Global, Session

    • Dynamic: Yes

    • Data Type: boolean

    • Default Value: ON

    connect_conv_size

    • Description: The size of the VARCHAR created when converting from a TEXT type. See connect_type_conv.

    • Command line: --connect-conv-size=#

    • Scope: Global, Session

    • Dynamic: Yes

    • Data Type: numeric

    • Default Value:

      • = : 1024

      • <= : 8192

    • Range: 0 to 65500

    connect_default_depth

    • Description: Default depth used by Json, XML and Mongo discovery.

    • Command line: --connect-default-depth=#

    • Scope: Global, Session

    • Dynamic: Yes

    • Data Type: numeric

    • Default Value:5

    • Range: -1 to 16

    • Introduced: ,

    connect_default_prec

    • Description: Default precision used for doubles.

    • Command line: --connect-default-prec=#

    • Scope: Global, Session

    • Dynamic: Yes

    • Data Type: numeric

    • Default Value:6

    • Range: 0 to 16

    • Introduced: ,

    connect_enable_mongo

    • Description: Enable the Mongo table type.

    • Command line: --connect-enable-mongo={0|1}

    • Scope: Global, Session

    • Dynamic:

    • Data Type: boolean

    • Default Value: OFF

    • Introduced: ,

    • Removed:

    connect_exact_info

    • Description: Whether the CONNECT engine should return an exact record number value to information queries. It is OFF by default because this information can take a very long time for large variable record length tables or for remote tables, especially if the remote server is not available. It can be set to ON when exact values are desired, for instance when querying the repartition of rows in a partition table.

    • Command line: --connect-exact-info={0|1}

    • Scope: Global, Session

    • Dynamic: Yes

    • Data Type: boolean

    • Default Value: OFF

    connect_force_bson

    • Description: Force using BSON for JSON tables. Starting with these releases, the internal way JSON was parsed and handled was changed. The main advantage of the new way is to reduce the memory required to parse JSON (from 6 to 10 times the size of the JSON source to now only 2 to 4 times). However, this is in Beta mode and JSON tables are still handled using the old mode. To use the new mode, tables should be created with TABLE_TYPE=BSON, or by setting this session variable to 1 or ON. Then, all JSON tables are handled as BSON. This is temporary until the new way replaces the old way by default.

    • Command line: --connect-force-bson={0|1}

    • Scope: Global, Session

    • Dynamic: Yes

    • Data Type: boolean

    • Default Value: OFF

    • Introduced: ,

    connect_indx_map

    • Description: Enable file mapping for index files. To accelerate the indexing process, CONNECT makes an index structure in memory from the index file. This can be done by reading the index file or using it as if it was in memory by “file mapping”. Set to 0 (file read, the default) or 1 (file mapping).

    • Command line: --connect-indx-map=#

    • Scope: Global

    • Dynamic: Yes

    • Data Type: boolean

    • Default Value: OFF

    connect_java_wrapper

    • Description: Java wrapper.

    • Command line: --connect-java-wrapper=val

    • Scope: Global, Session

    • Dynamic: Yes

    • Data Type: string

    • Default Value: wrappers/JdbcInterface

    connect_json_all_path

    • Description: When ON (the default), discovery generates a JSON path for all columns; when OFF, no path is generated when it would simply be the column name.

    • Command line: --connect-json-all-path={0|1}

    • Scope: Global, Session

    • Dynamic: Yes

    • Data Type: boolean

    • Default Value: ON

    • Introduced: ,

    connect_json_grp_size

    • Description: Max number of rows for JSON aggregate functions.

    • Command line: --connect-json-grp-size=#

    • Scope: Global, Session

    • Dynamic: Yes

    • Data Type: numeric

    • Default Value: 50 (>= Connect 1.7.0003), 10 (<= Connect 1.7.0002)

    • Range: 1 to 2147483647

    connect_json_null

    • Description: Representation of JSON null values.

    • Command line: --connect-json-null=value

    • Scope: Global, Session

    • Dynamic: Yes

    • Data Type: string

    • Default Value: <null>

    connect_jvm_path

    • Description: Path to JVM library.

    • Command line: --connect-jvm_path=value

    • Scope: Global

    • Dynamic:

    • Data Type: string

    • Default Value:

    connect_type_conv

    • Description: Determines the handling of TEXT columns.

      • NO: The default until Connect 1.06.005, no conversion takes place, and a TYPE_ERROR is returned, resulting in a “not supported” message.

      • YES: The default from Connect 1.06.006. The column is internally converted to a column declared as VARCHAR(n), n being the value of connect_conv_size.

      • FORCE (>= Connect 1.06.006): Also convert ODBC blob columns to TYPE_STRING.

      • SKIP: No conversion. When the column declaration is provided via Discovery (meaning the CONNECT table is created without a column description), this column is not generated. Also applies to ODBC tables.

    • Command line: --connect-type-conv=#

    • Scope: Global, Session

    • Dynamic: Yes

    • Data Type: enum

    • Valid Values: NO, YES, FORCE or SKIP

    • Default Value: YES
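
    For illustration, a session that wants TEXT-like columns converted to shorter VARCHARs could combine this variable with connect_conv_size (the values below are only an example, not recommended defaults):

    SET SESSION connect_type_conv = 'YES';
    SET SESSION connect_conv_size = 512;  -- converted columns become VARCHAR(512)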

    connect_use_tempfile

    • Description:

      • NO: The first algorithm is always used. Because it can cause errors when updating variable record length tables, this value should be set only for testing.

      • AUTO: This is the default value. It leaves CONNECT to choose the algorithm to use. Currently it is equivalent to NO, except when updating variable record length tables (DOS, CSV or FMT) with file mapping forced to OFF.

      • YES: Using a temporary file is chosen with some exceptions. These are when file mapping is ON, for tables and when deleting from tables (soft delete). For variable record length tables, file mapping is forced to OFF.

      • FORCE: Like YES but forces file mapping to be OFF for all table types.

      • TEST: Reserved for CONNECT development.

    • Command line: --connect-use-tempfile=#

    • Scope: Session

    • Dynamic: Yes

    • Data Type: enum

    • Default Value: AUTO
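
    For illustration, a session that is about to update a large variable record length table could request a temporary file explicitly (a sketch; the AUTO default usually makes a sensible choice on its own):

    SET SESSION connect_use_tempfile = 'YES';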

    connect_work_size

    • Description: Size of the CONNECT work area used for memory allocation. Permits allocating a larger memory sub-allocation space when sub-allocation fails while dealing with very large tables. If the specified value is too big and memory allocation fails, the size of the work area remains unchanged; the variable value is not modified and should be reset.

    • Command line: --connect-work-size=#

    • Scope: Global, Session (Session-only from CONNECT 1.03.005)

    • Dynamic: Yes

    • Data Type: numeric

    • Default Value: 67108864

    • Range: 4194304 upwards, depending on the physical memory size

    connect_xtrace

    • Description: Console trace value. Set to 0 (no trace), or to other values if console tracing is desired. Note that to test this handler, MariaDB should be executed with the --console parameter, because CONNECT prints some error and trace messages on the console. On some Linux versions this output is redirected to the error log file. Console tracing can be set on the command line or later by names or values. Valid values (from Connect 1.06.006) include:

      • 0: No trace

      • YES or 1: Basic trace

      • MORE or 2: More tracing

      • INDEX or 4: Index construction

      • MEMORY or 8: Allocating and freeing memory

      • SUBALLOC or 16: Sub-allocating in work area

      • QUERY or 32: Constructed query sent to external server

      • STMT or 64: Currently executing statement

      • HANDLER or 128: Creating and dropping CONNECT handlers

      • BLOCK or 256: Creating and dropping CONNECT objects

      • MONGO or 512: Mongo and REST tracing. For example:

      • set global connect_xtrace=0; No trace

      • set global connect_xtrace='YES'; By name

      • set global connect_xtrace=1; By value

      • set global connect_xtrace='QUERY,STMT'; By name

      • set global connect_xtrace=96; By value

      • set global connect_xtrace=1023; Trace all

    • Command line: --connect-xtrace=#

    • Scope: Global

    • Dynamic: Yes

    • Data Type: set

    • Default Value: 0

    • Valid Values: See description

    This page is licensed: CC BY-SA / Gnu FDL

    A DBF table is stored in dBASE format. This format is similar to the FIX type format, with in addition a prefix (file header) giving the characteristics of the file, describing in particular all the fields (columns) of the table.

    Because DBF files have a header that contains metadata about the file, in particular the column descriptions, it is possible to create a table based on an existing DBF file without giving the column description, for instance:

    CREATE TABLE cust ENGINE=CONNECT table_type=DBF file_name='cust.dbf';

    To see what CONNECT has done, you can use the DESCRIBE or SHOW CREATE TABLE commands, and eventually modify some options with the ALTER TABLE command.
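
    For example, continuing with the cust table created above, the discovered definition can be inspected and a table option adjusted afterwards (the readonly option here is purely illustrative):

    SHOW CREATE TABLE cust;
    ALTER TABLE cust readonly=1;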

    The case of deleted lines is handled in a specific way for DBF tables. Deleted lines are not removed from the file but are "soft deleted", meaning they are marked as deleted. In particular, the number of lines given in the file header does not account for soft deleted lines. This is why these two commands applied to a DBF table named tabdbf:

    SELECT COUNT(*) FROM tabdbf;
    SELECT COUNT(*) FROM tabdbf WHERE 1;

    can give different results, the (fast) first one giving the number of physical lines in the file and the second one giving the number of lines that are not (soft) deleted.

    The commands UPDATE, INSERT, and DELETE can be used with DBF tables. The DELETE command marks the deleted lines as suppressed but keeps them in the file. The INSERT command, if it is used to populate a newly created table, constructs the file header before inserting new lines.

    Note: For DBF tables, column name length is limited to 11 characters and field length to 256 bytes.

    Conversion of dBASE Data Types

    CONNECT handles only types that are stored as characters.

    Symbol | DBF Type | CONNECT Type | Description
    B | Binary (string) | TYPE_STRING | 10 digits representing a .DBT block number.
    C | Character | TYPE_STRING | All OEM code page characters - padded with blanks to the width of the field.
    D | Date | TYPE_DATE | 8 bytes - date stored as a string in the format YYYYMMDD.
    N | Numeric | TYPE_INT, TYPE_BIGINT or TYPE_DOUBLE | Number stored as a string, right justified, and padded with blanks to the width of the field.
    L | Logical | TYPE_STRING | 1 byte - initialized to 0x20, otherwise T or F.
    M | Memo (string) | TYPE_STRING | 10 digits representing a .DBT block number.
    @ | Timestamp | Not supported | 8 bytes - two longs, first for date, second for time. It is the number of days since 01/01/4713 BC.
    I | Long | Not supported | 4 bytes. Leftmost bit used to indicate sign, 0 negative.
    + | Autoincrement | Not supported | Same as a Long.
    F | Float | TYPE_DOUBLE | Number stored as a string, right justified, and padded with blanks to the width of the field.
    O | Double | Not supported | 8 bytes - no conversions, stored as a double.
    G | OLE | TYPE_STRING | 10 digits representing a .DBT block number.

    For the N numeric type, CONNECT converts it to TYPE_DOUBLE if the decimals value is not 0, to TYPE_BIGINT if the length value is greater than 10, else to TYPE_INT.

    For M, B, and G types, CONNECT just returns the DBT number.

    Reading soft deleted lines of a DBF table

    It is possible to read these lines by changing the read mode of the table. This is specified by an option READMODE that can take the values:

    0: Standard mode. This is the default option.

    1: Read all lines including soft deleted ones.

    2: Read only the soft deleted lines.

    For example, to read all lines of the tabdbf table, you can do:

    ALTER TABLE tabdbf option_list='Readmode=1';

    To come back to normal mode, specify READMODE=0.
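
    Similarly, to look only at the rows that have been soft deleted and then return to normal reading, one could do (a sketch based on the option values above):

    ALTER TABLE tabdbf option_list='Readmode=2';
    SELECT * FROM tabdbf;
    ALTER TABLE tabdbf option_list='Readmode=0';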

    This page is licensed: CC BY-SA / Gnu FDL


    CONNECT Create Table Options

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    Create Table statements for “CONNECT” tables are standard MariaDB create statements specifying engine=CONNECT. There are a few additional table and column options specific to CONNECT.

    Table Options

    Table Option
    Type
    Description

    All integers in the above table are unsigned big integers.

    Because CONNECT handles many table types, many type-specific options are not in the above list and must be entered using the OPTION_LIST option, whose value is a string of comma-separated opname=opvalue pairs.

    Be aware that until Connect 1.5.5, no blanks should be inserted before or after the '=' and ',' characters. The option name is all that is between the start of the string or the last ',' character and the next '=' character, and the option value is all that is between this '=' character and the next ',' or end of string. For instance:
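
    The original example is not included in this rendering; the statement below is a sketch consistent with the description that follows (the table definition and file name are illustrative):

    CREATE TABLE tab (
       id INT,
       msg VARCHAR(100)
    )
    ENGINE=CONNECT table_type=XML file_name='tab.htm'
    option_list='name=TABLE,coltype=HTML,attribute=border=1;cellpadding=5,headattr=bgcolor=yellow';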

    This defines four options, 'name', 'coltype', 'attribute', and 'headattr'; with values 'TABLE', 'HTML', 'border=1;cellpadding=5', and 'bgcolor=yellow', respectively. The only restriction is that values cannot contain commas, but they can contain equal signs.

    Column Options

    Column Option
    Type
    Description
    • The MAX_DIST and DISTRIB column options are used for block indexing.

    • All integers in the above table are unsigned big integers.

    • JPATH and XPATH were added to make CREATE TABLE statements more readable, but they do the same thing as FIELD_FORMAT and any of them can be used with the same result.

    Index Options

    Index Option
    Type
    Description

    Note 1: Creating a CONNECT table based on a file does not erase or create the file if the file name is specified in the CREATE TABLE statement (an “outward” table). If the file does not exist, it will be populated by subsequent INSERT or LOAD commands or by the “AS select statement” part of the CREATE TABLE command. Unlike the CSV engine, CONNECT easily permits the creation of tables based on already existing files, for instance files made by other applications. However, if the file name is not specified, a file with a name defaulting to tablename.tabletype will be created in the data directory (an “inward” table).

    Note 2: Dropping a CONNECT table is done with a standard DROP statement. For outward tables, this drops only the CONNECT table definition but does not erase the corresponding data file and index files. Use DELETE or TRUNCATE to do so. This is contrary to inward tables, whose data and index files are erased on DROP, like for other MariaDB engines.

    This page is licensed: GPLv2

    InnoDB Strict Mode

    InnoDB Strict Mode enforces stricter SQL compliance, returning errors instead of warnings for invalid CREATE TABLE options or potential data loss.

    InnoDB strict mode is similar to SQL strict mode. When it is enabled, certain InnoDB warnings become errors instead.

    Configuring InnoDB Strict Mode

    InnoDB strict mode is enabled by default.

    InnoDB strict mode can be enabled or disabled by configuring the innodb_strict_mode server system variable.

    Its global value can be changed dynamically with SET GLOBAL:

    SET GLOBAL innodb_strict_mode=ON;

    Its value for the current session can also be changed dynamically with SET SESSION:

    SET SESSION innodb_strict_mode=ON;

    It can also be set in a server option group in an option file prior to starting up the server:

    [mariadb]
    ...
    innodb_strict_mode=ON

    InnoDB Strict Mode Errors

    Wrong Create Options

    If InnoDB strict mode is enabled, and if a DDL statement is executed with invalid or conflicting table options, then an error is raised. The error is only a generic one that says the following:

    ERROR 1005 (HY000): Can't create table `db1`.`tab` (errno: 140 "Wrong create options")

    However, more details about the error can be found by executing SHOW WARNINGS.

    For example, the error is raised in the following cases:

    • The KEY_BLOCK_SIZE table option is set to a non-zero value, but the ROW_FORMAT table option is set to some row format other than the COMPRESSED row format:

    • The KEY_BLOCK_SIZE table option is set to a non-zero value, but the configured value is larger than either 16 or the value of the innodb_page_size system variable, whichever is smaller.

    • The KEY_BLOCK_SIZE table option is set to a non-zero value, but the innodb_file_per_table system variable is not set to ON.

    • The KEY_BLOCK_SIZE table option is set to a non-zero value, but it is not set to one of the supported values: [1, 2, 4, 8, 16].

    • The ROW_FORMAT table option is set to the COMPRESSED row format, but the innodb_file_per_table system variable is not set to ON.

    • The ROW_FORMAT table option is set to a value, but it is not set to one of the values supported by InnoDB: REDUNDANT, COMPACT, DYNAMIC, and COMPRESSED.

    • Either the KEY_BLOCK_SIZE table option is set to a non-zero value or the ROW_FORMAT table option is set to the COMPRESSED row format, but the innodb_page_size system variable is set to a value greater than 16k.

    • The DATA DIRECTORY table option is set, but the innodb_file_per_table system variable is not set to ON.

    • The DATA DIRECTORY table option is set, but the table is a temporary table.

    • The INDEX DIRECTORY table option is set.

    • The PAGE_COMPRESSED table option is set to 1, so InnoDB page compression is enabled, but the ROW_FORMAT table option is set to some row format other than the COMPACT or DYNAMIC row formats.

    • The PAGE_COMPRESSED table option is set to 1, so InnoDB page compression is enabled, but the innodb_file_per_table system variable is not set to ON.

    • The PAGE_COMPRESSED table option is set to 1, so InnoDB page compression is enabled, but the KEY_BLOCK_SIZE table option is also specified.

    • The PAGE_COMPRESSION_LEVEL table option is set, but the PAGE_COMPRESSED table option is set to 0, so InnoDB page compression is disabled.

    COMPRESSED Row Format

    If InnoDB strict mode is enabled, and if a table uses the COMPRESSED row format, and if the table's KEY_BLOCK_SIZE is too small to contain a row, then an error is returned by the statement.

    Row Size Too Large

    If InnoDB strict mode is enabled, and if a table exceeds its row format's maximum row size, then InnoDB will return an error.

    See Troubleshooting Row Size Too Large Errors with InnoDB for more information.

    This page is licensed: CC BY-SA / Gnu FDL

    InnoDB Online DDL Overview

    An introduction to InnoDB's online DDL capabilities, detailing the ALGORITHM and LOCK clauses for controlling performance and concurrency during schema changes.

    InnoDB tables support online DDL, which permits concurrent DML and uses optimizations to avoid unnecessary table copying.

    The ALTER TABLE statement supports two clauses that are used to implement online DDL:

    • ALGORITHM - This clause controls how the DDL operation is performed.

    • LOCK - This clause controls how much concurrency is allowed while the DDL operation is being performed.

    Alter Algorithms

    InnoDB supports multiple algorithms for performing DDL operations. This offers a significant performance improvement over previous versions. The supported algorithms are:

    • DEFAULT - This implies the default behavior for the specific operation.

    • COPY

    • INPLACE

    • NOCOPY

    • INSTANT

    Specifying an Alter Algorithm

    The set of alter algorithms can be considered as a hierarchy. The hierarchy is ranked in the following order, with least efficient algorithm at the top, and most efficient algorithm at the bottom:

    • COPY

    • INPLACE

    • NOCOPY

    • INSTANT

    When a user specifies an alter algorithm for a DDL operation, MariaDB does not necessarily use that specific algorithm for the operation. It interprets the choice in the following way:

    • If the user specifies COPY, then InnoDB uses the COPY algorithm.

    • If the user specifies any other algorithm, then InnoDB interprets that choice as the least efficient algorithm that the user is willing to accept. This means that if the user specifies INPLACE, then InnoDB will use the most efficient algorithm supported by the specific operation from the set (INPLACE, NOCOPY, INSTANT). Likewise, if the user specifies NOCOPY, then InnoDB will use the most efficient algorithm supported by the specific operation from the set (NOCOPY, INSTANT).

    There is also a special value that can be specified:

    • If the user specifies DEFAULT, then InnoDB uses its default choice for the operation. The default choice is to use the most efficient algorithm supported by the operation. The default choice will also be used if no algorithm is specified. Therefore, if you want InnoDB to use the most efficient algorithm supported by an operation, then you usually do not have to explicitly specify any algorithm at all.

    Specifying an Alter Algorithm Using the ALGORITHM Clause

    InnoDB supports the ALGORITHM clause.

    The ALGORITHM clause can be used to specify the least efficient algorithm that the user is willing to accept. It is supported by the ALTER TABLE and CREATE INDEX statements.

    For example, if a user wanted to add a column to a table, but only if the operation used an algorithm that is at least as efficient as the INPLACE, then they could execute the following:
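
    (The original example statement is not included in this rendering; the statement below is a minimal sketch with a hypothetical table tab and column c.)

    ALTER TABLE tab ADD COLUMN c VARCHAR(50), ALGORITHM=INPLACE;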

    The above operation should use the INSTANT algorithm, because the ADD COLUMN operation supports the INSTANT algorithm, and the INSTANT algorithm is more efficient than the INPLACE algorithm.

    Specifying an Alter Algorithm Using System Variables

    The alter_algorithm system variable can be used to pick the least efficient algorithm that the user is willing to accept.

    For example, if a user wanted to add a column to a table, but only if the operation used an algorithm that is at least as efficient as the INPLACE, then they could execute the following:
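
    (Again the example statement is missing here; the sketch below uses the same hypothetical table and column.)

    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ADD COLUMN c VARCHAR(50);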

    The above operation would actually use the INSTANT algorithm, because the ADD COLUMN operation supports the INSTANT algorithm, and the INSTANT algorithm is more efficient than the INPLACE algorithm.

    Supported Alter Algorithms

    The supported algorithms are described in more detail below.

    DEFAULT Algorithm

    The default behavior, which occurs if ALGORITHM=DEFAULT is specified, or if ALGORITHM is not specified at all, usually only makes a copy if the operation doesn't support being done in-place at all. In this case, the most efficient available algorithm will usually be used.

    This means that, if an operation supports the INSTANT algorithm, then it will use that algorithm by default. If an operation does not support the INSTANT algorithm, but it does support the NOCOPY algorithm, then it will use that algorithm by default. If an operation does not support the NOCOPY algorithm, but it does support the INPLACE algorithm, then it will use that algorithm by default.

    COPY Algorithm

    The COPY algorithm refers to the original ALTER TABLE algorithm.

    When the COPY algorithm is used, MariaDB essentially does the following operations:
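
    The list of steps is not included in this rendering; conceptually, the COPY algorithm behaves roughly like the following sketch (table and column names are illustrative):

    CREATE TABLE tmp_tab (id INT PRIMARY KEY, str VARCHAR(100)); -- new definition
    INSERT INTO tmp_tab SELECT id, str FROM tab;                 -- copy every row
    RENAME TABLE tab TO old_tab, tmp_tab TO tab;                 -- swap the tables
    DROP TABLE old_tab;                                          -- drop the original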

    This algorithm is very inefficient, but it is generic, so it works for all storage engines.

    If the COPY algorithm is specified with the ALGORITHM clause or with the alter_algorithm system variable, then the COPY algorithm will be used even if it is not necessary. This can result in a lengthy table copy. If multiple ALTER TABLE operations are required that each require the table to be rebuilt, then it is best to specify all operations in a single ALTER TABLE statement, so that the table is only rebuilt once.

    Using the COPY Algorithm with InnoDB

    If the COPY algorithm is used with an InnoDB table, then the following statements apply:

    • The table will be rebuilt using the current values of the innodb_file_per_table, innodb_file_format, and innodb_default_row_format system variables.

    • The operation will have to create a temporary table to perform the table copy. This temporary table will be in the same directory as the original table, and its file name will be in the format #sql${PID}_${THREAD_ID}_${TMP_TABLE_COUNT}, where ${PID} is the process ID of mysqld, ${THREAD_ID} is the connection ID, and ${TMP_TABLE_COUNT} is the number of temporary tables that the connection has open. Therefore, the datadir may contain files with file names like #sql1234_12_1.ibd.

    INPLACE Algorithm

    The COPY algorithm can be incredibly slow, because the whole table has to be copied and rebuilt. The INPLACE algorithm was introduced as a way to avoid this by performing operations in-place and avoiding the table copy and rebuild, when possible.

    When the INPLACE algorithm is used, the underlying storage engine uses optimizations to perform the operation while avoiding the table copy and rebuild. However, INPLACE is a bit of a misnomer, since some operations may still require the table to be rebuilt for some storage engines. Regardless, several operations can be performed without a full copy of the table for some storage engines.

    A more accurate name for the algorithm would have been the ENGINE algorithm, since the storage engine decides how to implement the algorithm.

    If an operation supports the INPLACE algorithm, then it can be performed using optimizations by the underlying storage engine, but it may still rebuild the table.

    If the INPLACE algorithm is specified with the ALGORITHM clause or with the alter_algorithm system variable, and the operation does not support the INPLACE algorithm, then an error will be raised:
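
    For instance, changing a column's data type is not supported in-place, so a statement like the hypothetical one below fails instead of silently falling back to a table copy (the exact error text depends on the operation and version):

    ALTER TABLE tab MODIFY COLUMN str TEXT, ALGORITHM=INPLACE;
    -- ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Reason: ... Try ALGORITHM=COPY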

    In this case, raising an error is preferable, if the alternative is for the operation to make a copy of the table, and perform unexpectedly slowly.

    Using the INPLACE Algorithm with InnoDB

    If the INPLACE algorithm is used with an InnoDB table, then the following statements apply:

    • The operation might have to write sort files in the directory defined by the innodb_tmpdir system variable.

    • The operation might also have to write a temporary log file to track data changes made by DML queries executed during the operation. The maximum size for this log file is configured by the innodb_online_alter_log_max_size system variable.

    • Some operations require the table to be rebuilt, even though the algorithm is inaccurately called "in-place". This includes operations such as adding or dropping columns, adding a primary key, changing a column to NULL, etc.

    Operations Supported by InnoDB with the INPLACE Algorithm

    With respect to the allowed operations, the INPLACE algorithm supports a subset of the operations supported by the COPY algorithm, and it supports a superset of the operations supported by the NOCOPY algorithm.

    See InnoDB Online DDL Operations with ALGORITHM=INPLACE for more information.

    NOCOPY Algorithm

    The NOCOPY algorithm is supported. The INPLACE algorithm can sometimes be surprisingly slow in instances where it has to rebuild the clustered index, because when the clustered index has to be rebuilt, the whole table has to be rebuilt. The NOCOPY algorithm was introduced as a way to avoid this.

    If an operation supports the NOCOPY algorithm, then it can be performed without rebuilding the clustered index.

    If the NOCOPY algorithm is specified with the ALGORITHM clause or with the alter_algorithm system variable, and the operation does not support the NOCOPY algorithm, then an error will be raised:

    In this case, raising an error is preferable, if the alternative is for the operation to rebuild the clustered index, and perform unexpectedly slowly.

    Operations Supported by InnoDB with the NOCOPY Algorithm

    With respect to the allowed operations, the NOCOPY algorithm supports a subset of the operations supported by the INPLACE algorithm, and it supports a superset of the operations supported by the INSTANT algorithm.

    See InnoDB Online DDL Operations with ALGORITHM=NOCOPY for more information.

    INSTANT Algorithm

    The INSTANT algorithm is supported. The INPLACE algorithm can sometimes be surprisingly slow in instances where it has to modify data files. The INSTANT algorithm was introduced as a way to avoid this.

    If an operation supports the INSTANT algorithm, then it can be performed without modifying any data files.

    If the INSTANT algorithm is specified with the ALGORITHM clause or with the alter_algorithm system variable, and the operation does not support the INSTANT algorithm, then an error will be raised:

    In this case, raising an error is preferable, if the alternative is for the operation to modify data files, and perform unexpectedly slowly.

    Operations Supported by InnoDB with the INSTANT Algorithm

    With respect to the allowed operations, the INSTANT algorithm supports a subset of the operations supported by the NOCOPY algorithm.

    See InnoDB Online DDL Operations with ALGORITHM=INSTANT for more information.

    Alter Locking Strategies

    InnoDB supports multiple locking strategies for performing DDL operations. This offers a significant performance improvement over previous versions. The supported locking strategies are:

    • DEFAULT - This implies the default behavior for the specific operation.

    • NONE

    • SHARED

    • EXCLUSIVE

    Regardless of which locking strategy is used to perform a DDL operation, InnoDB will have to exclusively lock the table for a short time at the start and end of the operation's execution. This means that any active transactions that may have accessed the table must be committed or aborted for the operation to continue. This applies to most DDL statements, such as ALTER TABLE, CREATE INDEX, DROP INDEX, OPTIMIZE TABLE, RENAME TABLE, etc.

    Specifying an Alter Locking Strategy

    Specifying an Alter Locking Strategy Using the LOCK Clause

    The ALTER TABLE statement supports the LOCK clause.

    The LOCK clause can be used to specify the locking strategy that the user is willing to accept. It is supported by the ALTER TABLE and CREATE INDEX statements.

    For example, if a user wanted to add a column to a table, but only if the operation is non-locking, then they could execute the following:
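
    (The example statement is not included in this rendering; the statement below is a sketch with a hypothetical table and column.)

    ALTER TABLE tab ADD COLUMN c VARCHAR(50), LOCK=NONE;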

    If the LOCK clause is not explicitly set, then the operation uses LOCK=DEFAULT.

    Specifying an Alter Locking Strategy Using ALTER ONLINE TABLE

    ALTER ONLINE TABLE is equivalent to LOCK=NONE. Therefore, the ALTER ONLINE TABLE statement can be used to ensure that your ALTER TABLE operation allows all concurrent DML.
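
    For example, the following two hypothetical statements are equivalent:

    ALTER ONLINE TABLE tab ADD COLUMN c VARCHAR(50);
    ALTER TABLE tab ADD COLUMN c VARCHAR(50), LOCK=NONE;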

    Supported Alter Locking Strategies

    The supported locking strategies are described in more detail below.

    To see which locking strategies InnoDB supports for each operation, see the pages that describe which operations are supported for each algorithm:

    • InnoDB Online DDL Operations with ALGORITHM=INPLACE

    • InnoDB Online DDL Operations with ALGORITHM=NOCOPY

    • InnoDB Online DDL Operations with ALGORITHM=INSTANT

    DEFAULT Locking Strategy

    The default behavior, which occurs if LOCK=DEFAULT is specified, or if LOCK is not specified at all, is to acquire the least restrictive lock on the table that is supported for the specific operation. This permits the maximum amount of concurrency that is supported for the specific operation.

    NONE Locking Strategy

    The NONE locking strategy performs the operation without acquiring any lock on the table. This permits all concurrent DML.

    If this locking strategy is not permitted for an operation, then an error is raised.

    SHARED Locking Strategy

    The SHARED locking strategy performs the operation after acquiring a read lock on the table. This permits read-only concurrent DML.

    If this locking strategy is not permitted for an operation, then an error is raised.

    EXCLUSIVE Locking Strategy

    The EXCLUSIVE locking strategy performs the operation after acquiring a write lock on the table. This does not permit concurrent DML.

    This page is licensed: CC BY-SA / Gnu FDL

    Aria System Variables

    A comprehensive list of system variables for configuring Aria, including buffer sizes, log settings, and recovery options.

    This page documents system variables related to the Aria storage engine. For options that are not system variables, see the Aria options documentation.

    See the Server System Variables documentation for instructions on setting system variables.

    aria_block_size

    Aria FAQ

    Frequently asked questions about the Aria storage engine, covering its history, comparison with MyISAM, and key features like crash safety.

    This FAQ provides information on the Aria storage engine.

    The Aria storage engine was previously known as Maria (see The Aria Name for the history of the name change). In current releases of MariaDB, you can refer to the engine as Maria or Aria. As this will change in future releases, please update references in your scripts and automation to use the correct name.

    What is Aria?

    Aria is a storage engine for MySQL® and MariaDB. It was originally developed with the goal of becoming the default transactional and non-transactional storage engine for MariaDB and MySQL.

    CONNECT Table Types - Special "Virtual" Tables

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    The special table types supported by CONNECT are the Virtual table type (VIR), the Directory Listing table type (DIR), the Windows Management Instrumentation table type (WMI), and the “Mac Address” type (MAC).

    These tables are “virtual tables”, meaning they have no physical data but rather produce result data using specific algorithms. Note that this is close to what Views are, so they could be regarded as special views.

    About FederatedX

    An introduction to the FederatedX storage engine, a fork of MySQL's Federated engine, allowing access to remote tables as if they were local. This storage engine has been deprecated.

    This storage engine has been deprecated.

    The FederatedX storage engine is a fork of MySQL's Federated storage engine, which is no longer being developed by Oracle. The original purpose of FederatedX was to keep this storage engine's development progressing: both to add new features and to fix old bugs.

    Since , the storage engine also allows access to a remote database via MySQL or ODBC connection (table types: , ). However, in the current implementation there are several limitations.



    SET SESSION innodb_strict_mode=ON;
    
    CREATE OR REPLACE TABLE tab (
       id INT PRIMARY KEY,
       str VARCHAR(50)
    )
    KEY_BLOCK_SIZE=4
    ROW_FORMAT=DYNAMIC;
    ERROR 1005 (HY000): Can't create table `db1`.`tab` (errno: 140 "Wrong create options")
    
    SHOW WARNINGS;
    +---------+------+--------------------------------------------------------------------+
    | Level   | Code | Message                                                            |
    +---------+------+--------------------------------------------------------------------+
    | Warning | 1478 | InnoDB: cannot specify ROW_FORMAT = DYNAMIC with KEY_BLOCK_SIZE.   |
    | Error   | 1005 | Can't create table `db1`.`tab` (errno: 140 "Wrong create options") |
    | Warning | 1030 | Got error 140 "Wrong create options" from storage engine InnoDB    |
    +---------+------+--------------------------------------------------------------------+
    3 rows in set (0.000 sec)
    SET SESSION innodb_strict_mode=ON;
    
    CREATE OR REPLACE TABLE tab (
       id INT PRIMARY KEY,
       str VARCHAR(50)
    )
    KEY_BLOCK_SIZE=16;
    ERROR 1005 (HY000): Can't create table `db1`.`tab` (errno: 140 "Wrong create options")
    
    SHOW WARNINGS;
    +---------+------+--------------------------------------------------------------------+
    | Level   | Code | Message                                                            |
    +---------+------+--------------------------------------------------------------------+
    | Warning | 1478 | InnoDB: KEY_BLOCK_SIZE=16 cannot be larger than 8.                 |
    | Error   | 1005 | Can't create table `db1`.`tab` (errno: 140 "Wrong create options") |
    | Warning | 1030 | Got error 140 "Wrong create options" from storage engine InnoDB    |
    +---------+------+--------------------------------------------------------------------+
    3 rows in set (0.000 sec)
    SET GLOBAL innodb_file_per_table=OFF;
    SET SESSION innodb_strict_mode=ON;
    
    CREATE OR REPLACE TABLE tab (
       id INT PRIMARY KEY,
       str VARCHAR(50)
    )
    KEY_BLOCK_SIZE=4;
    ERROR 1005 (HY000): Can't create table `db1`.`tab` (errno: 140 "Wrong create options")
    
    SHOW WARNINGS;
    +---------+------+--------------------------------------------------------------------+
    | Level   | Code | Message                                                            |
    +---------+------+--------------------------------------------------------------------+
    | Warning | 1478 | InnoDB: KEY_BLOCK_SIZE requires innodb_file_per_table.             |
    | Error   | 1005 | Can't create table `db1`.`tab` (errno: 140 "Wrong create options") |
    | Warning | 1030 | Got error 140 "Wrong create options" from storage engine InnoDB    |
    +---------+------+--------------------------------------------------------------------+
    3 rows in set (0.000 sec)
    SET SESSION innodb_strict_mode=ON;
    
    CREATE OR REPLACE TABLE tab (
       id INT PRIMARY KEY,
       str VARCHAR(50)
    )
    KEY_BLOCK_SIZE=5;
    ERROR 1005 (HY000): Can't create table `db1`.`tab` (errno: 140 "Wrong create options")
    
    SHOW WARNINGS;
    +---------+------+-----------------------------------------------------------------------+
    | Level   | Code | Message                                                               |
    +---------+------+-----------------------------------------------------------------------+
    | Warning | 1478 | InnoDB: invalid KEY_BLOCK_SIZE = 5. Valid values are [1, 2, 4, 8, 16] |
    | Error   | 1005 | Can't create table `db1`.`tab` (errno: 140 "Wrong create options")    |
    | Warning | 1030 | Got error 140 "Wrong create options" from storage engine InnoDB       |
    +---------+------+-----------------------------------------------------------------------+
    3 rows in set (0.000 sec)
    SET GLOBAL innodb_file_per_table=OFF;
    SET SESSION innodb_strict_mode=ON;
    
    CREATE OR REPLACE TABLE tab (
       id INT PRIMARY KEY,
       str VARCHAR(50)
    )
    ROW_FORMAT=COMPRESSED;
    ERROR 1005 (HY000): Can't create table `db1`.`tab` (errno: 140 "Wrong create options")
    
    SHOW WARNINGS;
    +---------+------+--------------------------------------------------------------------+
    | Level   | Code | Message                                                            |
    +---------+------+--------------------------------------------------------------------+
    | Warning | 1478 | InnoDB: ROW_FORMAT=COMPRESSED requires innodb_file_per_table.      |
    | Error   | 1005 | Can't create table `db1`.`tab` (errno: 140 "Wrong create options") |
    | Warning | 1030 | Got error 140 "Wrong create options" from storage engine InnoDB    |
    +---------+------+--------------------------------------------------------------------+
    3 rows in set (0.000 sec)
    SET SESSION innodb_strict_mode=ON;
    
    CREATE OR REPLACE TABLE tab (
       id INT PRIMARY KEY,
       str VARCHAR(50)
    )
    ROW_FORMAT=PAGE;
    ERROR 1005 (HY000): Can't create table `db1`.`tab` (errno: 140 "Wrong create options")
    
    SHOW WARNINGS;
    +---------+------+--------------------------------------------------------------------+
    | Level   | Code | Message                                                            |
    +---------+------+--------------------------------------------------------------------+
    | Warning | 1478 | InnoDB: invalid ROW_FORMAT specifier.                              |
    | Error   | 1005 | Can't create table `db1`.`tab` (errno: 140 "Wrong create options") |
    | Warning | 1030 | Got error 140 "Wrong create options" from storage engine InnoDB    |
    +---------+------+--------------------------------------------------------------------+
    3 rows in set (0.000 sec)
    SET SESSION innodb_strict_mode=ON;
    
    CREATE OR REPLACE TABLE tab (
       id INT PRIMARY KEY,
       str VARCHAR(50)
    )
    ROW_FORMAT=COMPRESSED;
    ERROR 1005 (HY000): Can't create table `db1`.`tab` (errno: 140 "Wrong create options")
    
    SHOW WARNINGS;
    +---------+------+-----------------------------------------------------------------------+
    | Level   | Code | Message                                                               |
    +---------+------+-----------------------------------------------------------------------+
    | Warning | 1478 | InnoDB: Cannot create a COMPRESSED table when innodb_page_size > 16k. |
    | Error   | 1005 | Can't create table `db1`.`tab` (errno: 140 "Wrong create options")    |
    | Warning | 1030 | Got error 140 "Wrong create options" from storage engine InnoDB       |
    +---------+------+-----------------------------------------------------------------------+
    3 rows in set (0.00 sec)
    SET GLOBAL innodb_file_per_table=OFF;
    SET SESSION innodb_strict_mode=ON;
    
    CREATE OR REPLACE TABLE tab (
       id INT PRIMARY KEY,
       str VARCHAR(50)
    )
    DATA DIRECTORY='/mariadb';
    ERROR 1005 (HY000): Can't create table `db1`.`tab` (errno: 140 "Wrong create options")
    
    SHOW WARNINGS;
    +---------+------+--------------------------------------------------------------------+
    | Level   | Code | Message                                                            |
    +---------+------+--------------------------------------------------------------------+
    | Warning | 1478 | InnoDB: DATA DIRECTORY requires innodb_file_per_table.             |
    | Error   | 1005 | Can't create table `db1`.`tab` (errno: 140 "Wrong create options") |
    | Warning | 1030 | Got error 140 "Wrong create options" from storage engine InnoDB    |
    +---------+------+--------------------------------------------------------------------+
    3 rows in set (0.000 sec)
    SET SESSION innodb_strict_mode=ON;
    
    CREATE OR REPLACE TEMPORARY TABLE tab (
       id INT PRIMARY KEY,
       str VARCHAR(50)
    )
    DATA DIRECTORY='/mariadb';
    ERROR 1005 (HY000): Can't create table `db1`.`tab` (errno: 140 "Wrong create options")
    
    SHOW WARNINGS;
    +---------+------+--------------------------------------------------------------------+
    | Level   | Code | Message                                                            |
    +---------+------+--------------------------------------------------------------------+
    | Warning | 1478 | InnoDB: DATA DIRECTORY cannot be used for TEMPORARY tables.        |
    | Error   | 1005 | Can't create table `db1`.`tab` (errno: 140 "Wrong create options") |
    | Warning | 1030 | Got error 140 "Wrong create options" from storage engine InnoDB    |
    +---------+------+--------------------------------------------------------------------+
    3 rows in set (0.000 sec)
    SET SESSION innodb_strict_mode=ON;
    
    CREATE OR REPLACE TABLE tab (
       id INT PRIMARY KEY,
       str VARCHAR(50)
    )
    INDEX DIRECTORY='/mariadb';
    ERROR 1005 (HY000): Can't create table `db1`.`tab` (errno: 140 "Wrong create options")
    
    SHOW WARNINGS;
    +---------+------+--------------------------------------------------------------------+
    | Level   | Code | Message                                                            |
    +---------+------+--------------------------------------------------------------------+
    | Warning | 1478 | InnoDB: INDEX DIRECTORY is not supported                           |
    | Error   | 1005 | Can't create table `db1`.`tab` (errno: 140 "Wrong create options") |
    | Warning | 1030 | Got error 140 "Wrong create options" from storage engine InnoDB    |
    +---------+------+--------------------------------------------------------------------+
    3 rows in set (0.000 sec)
    SET SESSION innodb_strict_mode=ON;
    
    CREATE OR REPLACE TABLE tab (
       id INT PRIMARY KEY,
       str VARCHAR(50)
    )
    PAGE_COMPRESSED=1
    ROW_FORMAT=COMPRESSED;
    ERROR 1005 (HY000): Can't create table `db1`.`tab` (errno: 140 "Wrong create options")
    
    SHOW WARNINGS;
    +---------+------+--------------------------------------------------------------------+
    | Level   | Code | Message                                                            |
    +---------+------+--------------------------------------------------------------------+
    | Warning |  140 | InnoDB: PAGE_COMPRESSED table can't have ROW_TYPE=COMPRESSED       |
    | Error   | 1005 | Can't create table `db1`.`tab` (errno: 140 "Wrong create options") |
    | Warning | 1030 | Got error 140 "Wrong create options" from storage engine InnoDB    |
    +---------+------+--------------------------------------------------------------------+
    3 rows in set (0.000 sec)
    SET GLOBAL innodb_file_per_table=OFF;
    SET SESSION innodb_strict_mode=ON;
    
    CREATE OR REPLACE TABLE tab (
       id INT PRIMARY KEY,
       str VARCHAR(50)
    )
    PAGE_COMPRESSED=1;
    ERROR 1005 (HY000): Can't create table `db1`.`tab` (errno: 140 "Wrong create options")
    
    SHOW WARNINGS;
    +---------+------+--------------------------------------------------------------------+
    | Level   | Code | Message                                                            |
    +---------+------+--------------------------------------------------------------------+
    | Warning |  140 | InnoDB: PAGE_COMPRESSED requires innodb_file_per_table.            |
    | Error   | 1005 | Can't create table `db1`.`tab` (errno: 140 "Wrong create options") |
    | Warning | 1030 | Got error 140 "Wrong create options" from storage engine InnoDB    |
    +---------+------+--------------------------------------------------------------------+
    3 rows in set (0.000 sec)
    SET SESSION innodb_strict_mode=ON;
    
    CREATE OR REPLACE TABLE tab (
       id INT PRIMARY KEY,
       str VARCHAR(50)
    )
    PAGE_COMPRESSED=1
    KEY_BLOCK_SIZE=4;
    ERROR 1005 (HY000): Can't create table `db1`.`tab` (errno: 140 "Wrong create options")
    
    SHOW WARNINGS;
    +---------+------+--------------------------------------------------------------------+
    | Level   | Code | Message                                                            |
    +---------+------+--------------------------------------------------------------------+
    | Warning |  140 | InnoDB: PAGE_COMPRESSED table can't have key_block_size            |
    | Error   | 1005 | Can't create table `db1`.`tab` (errno: 140 "Wrong create options") |
    | Warning | 1030 | Got error 140 "Wrong create options" from storage engine InnoDB    |
    +---------+------+--------------------------------------------------------------------+
    3 rows in set (0.000 sec)
    SET SESSION innodb_strict_mode=ON;
    
    CREATE OR REPLACE TABLE tab (
       id INT PRIMARY KEY,
       str VARCHAR(50)
    )
    PAGE_COMPRESSED=0
    PAGE_COMPRESSION_LEVEL=9;
    ERROR 1005 (HY000): Can't create table `db1`.`tab` (errno: 140 "Wrong create options")
    
    SHOW WARNINGS;
    +---------+------+--------------------------------------------------------------------+
    | Level   | Code | Message                                                            |
    +---------+------+--------------------------------------------------------------------+
    | Warning |  140 | InnoDB: PAGE_COMPRESSION_LEVEL requires PAGE_COMPRESSED            |
    | Error   | 1005 | Can't create table `db1`.`tab` (errno: 140 "Wrong create options") |
    | Warning | 1030 | Got error 140 "Wrong create options" from storage engine InnoDB    |
    +---------+------+--------------------------------------------------------------------+
    3 rows in set (0.000 sec)
    ERROR 1118 (42000): Row size too large (> 8126). Changing some columns to
    TEXT or BLOB may help. In current row format, BLOB prefix of 0 bytes is stored inline.

    COMPRESS

    Number

    1 or 2 if the data file is g-zip compressed. Defaults to 0. Before CONNECT 1.05.0001, this was boolean, and true if the data file is compressed.

    CONNECTION

    String

    Specifies the connection of an , or table.

    DATA_CHARSET

    String

    The character set used in the external file or data source.

    DBNAME

    String

    The target database for , , , , and based tables. The database concept is sometimes known as a schema.

    ENGINE

    String

    Must be specified as CONNECT.

    ENDING

    Integer

    End of line length. Defaults to 1 for Unix/Linux and 2 for Windows.

    FILE_NAME

    String

    The file (path) name for all table types based on files. Can be absolute or relative to the current data directory. If not specified, this is an and a default value is used.

    FILTER

    String

    To filter an external table. Currently MONGO tables only.

    HEADER

    Integer

    Applies to , , and HTML files. Its meaning depends on the table type.

    HTTP

    String

    The HTTP of the client of REST queries. From .

    HUGE

    Boolean

    To specify that a table file can be larger than 2GB. For a table, prevents the result set from being stored in memory.

    LRECL

    Integer

    The file record size (often calculated by default).

    MAPPED

    Boolean

    Specifies whether file mapping is used to handle the table file.

    MODULE

    String

    The (path) name of the DLL or shared lib implementing the access of a non-standard () table type.

    MULTIPLE

    Integer

    Used to specify multiple file tables.

    OPTION_LIST

    String

    Used to specify all other options not yet directly defined.

    QCHAR

    String

    Specifies the character used for quoting some fields of a table or the identifiers of an / tables.

    QUOTED

    Integer

    The level of quoting used in table files.

    READONLY

    Boolean

    True if the data file must not be modified or erased.

    SEP_CHAR

    String

    Specifies the field separator character of a or table. Also, used to specify the Jpath separator for .

    SEPINDEX

    Boolean

    When true, indexes are saved in separate files.

    SPLIT

    Boolean

    True for a table when all columns are in separate files.

    SRCDEF

    String

    The source definition of a table retrieved via , or the MySQL API or used by a table.

    SUBTYPE

    String

    The subtype of an table type.

    TABLE_LIST

    String

    The comma separated list of TBL table sub-tables.

    TABLE_TYPE

    String

    The external table : , , , , , , , , , , , , , , , , , , , , , , , and . Defaults to , , or depending on what options are used.

    TABNAME

    String

    The target table or node for , , , , or ; or the top node name for tables.

    URI

    String

    The URI of a REST request. From .

    XFILE_NAME

    String

    The file (path) base name for table index files. Can be absolute or relative to the data directory. Defaults to the file name.

    ZIPPED

    Boolean

    True if the table file(s) is/are zipped in one or several zip files.

    FLAG

    Integer

    An integer value whose meaning depends on the table type.

    JPATH

    String

    The Json path of JSON table columns.

    MAX_DIST

    Integer

    Maximum number of distinct values in this column.

    SPECIAL

    String

    The name of the SPECIAL column that set this column value.

    XPATH

    String

    The XML path of XML table columns.

    AVG_ROW_LENGTH

    Integer

    Can be specified to help CONNECT estimate the size of a variable record table length.

    BLOCK_SIZE

    Integer

    The number of rows each block of a FIX, BIN, DBF, or VEC table contains. For an ODBC table this is the RowSet size option. For a JDBC table this is the fetch size.

    CATFUNC

    String

    The catalog function used by a catalog table.

    COLIST

    String

    The column list of tables or $project of tables.

    DATE_FORMAT

    String

    The format indicating how a date is stored in the file.

    DISTRIB

    Enum

    “scattered”, “clustered”, “sorted” (ascending).

    FIELD_FORMAT

    String

    The column format for some table types.

    FIELD_LENGTH

    Integer

    Set the internal field length for DATE columns.

    DYNAM

    Boolean

    Set the index as “dynamic”.

    MAPPED

    Boolean

    Use index file mapping.



  • The operation inserts one record at a time into each index, which is very inefficient.

  • InnoDB does not use a sort buffer.

  • The table copy operation creates a lot fewer InnoDB undo log writes. See MDEV-11415 for more information.

  • The table copy operation creates a lot of InnoDB redo log writes.

  • If the operation requires the table to be rebuilt, then the operation might have to create temporary tables.
    • It may have to create a temporary intermediate table for the actual table rebuild operation.

      • This temporary table will be in the same directory as the original table, and its file name will be in the format #sql${PID}_${THREAD_ID}_${TMP_TABLE_COUNT}, where ${PID} is the process ID of mysqld, ${THREAD_ID} is the connection ID, and ${TMP_TABLE_COUNT} is the number of temporary tables that the connection has open. Therefore, the datadir may contain files with file names like #sql1234_12_1.ibd.

    • When it replaces the original table with the rebuilt table, it may also have to rename the original table using a temporary table name.

      • If the corresponding system variable is set to OFF, then the format will actually be #sql-ib${TABLESPACE_ID}-${RAND}, where ${TABLESPACE_ID} is the table's tablespace ID within InnoDB and ${RAND} is a randomly initialized number. Therefore, the datadir may contain files with file names like #sql-ib230291-1363966925.ibd.

  • The storage needed for the above items can add up to the size of the original table, or more in some cases.

  • Some operations are instantaneous, if they only require the table's metadata to be changed. This includes operations such as renaming a column, changing a column's DEFAULT value, etc.

    • Description: Block size to be used for Aria index pages. Changing this requires dumping your Aria tables, deleting the old tables and all Aria log files, and then restoring the tables. If key lookups take too long (by default roughly 8192/2 bytes have to be searched to find each key), the block size can be made smaller, e.g. 4096.
  • Command line: --aria-block-size=#

  • Scope: Global

  • Dynamic: No

  • Data Type: numeric

  • Default Value: 8192

  • Range:

    • = : 4096 to 32768 in increments of 1024

    • <= : 1024 to 32768 in increments of 1024

    aria_checkpoint_interval

    • Description: Interval in seconds between automatic checkpoints. 0 means 'no automatic checkpoints' which makes sense only for testing.

    • Command line: --aria-checkpoint-interval=#

    • Scope: Global

    • Dynamic: Yes

    • Data Type: numeric

    • Default Value: 30

    • Range: 0 to 4294967295

    aria_checkpoint_log_activity

    • Description: Number of bytes that the transaction log has to grow between checkpoints before a new checkpoint is written to the log.

    • Command line: --aria-checkpoint-log-activity=#

    • Scope: Global

    • Dynamic: Yes

    • Data Type: numeric

    • Default Value: 1048576

    • Range 0 to 4294967295

    aria_encrypt_tables

    • Description: Enables automatic encryption of all user-created Aria tables that have the ROW_FORMAT table option set to PAGE. See Data at Rest Encryption and Enabling Encryption for User-created Tables.

    • Command line: --aria-encrypt-tables={0|1}

    • Scope: Global

    • Dynamic: Yes

    • Data Type: boolean

    • Default Value: OFF
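
    For illustration, assuming an encryption key management plugin (such as file_key_management) is already configured, encryption of newly created Aria tables with ROW_FORMAT=PAGE can be switched on dynamically:

    SET GLOBAL aria_encrypt_tables=ON;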

    aria_force_start_after_recovery_failures

    • Description: Number of consecutive log recovery failures after which logs are automatically deleted to cure the problem; 0 (the default) disables the feature.

    • Command line: --aria-force-start-after-recovery-failures=#

    • Scope: Global

    • Dynamic: No

    • Data Type: numeric

    • Default Value: 0

    aria_group_commit

    • Description: Specifies Aria group commit mode.

    • Command line: --aria_group_commit="value"

    • Alias: maria_group_commit

    • Scope: Global

    • Dynamic: No

    • Data Type: string

    • Valid values:

      • none - Group commit is disabled.

      • hard - Wait the number of microseconds specified by aria_group_commit_interval before actually doing the commit. If the interval is 0 then just check if any other threads have requested a commit during the time this commit was preparing (just before sync() file) and send their data to disk also before sync().

    • Default Value: none

    aria_group_commit_interval

    • Description: Interval between Aria group commits in microseconds (1/1000000 second) for other threads to come and do a commit in "hard" mode and sync()/commit at all in "soft" mode. Option only has effect if aria_group_commit is used.

    • Command line: --aria_group_commit_interval=#

    • Alias: maria_group_commit_interval

    • Scope: Global

    • Dynamic: No

    • Type: numeric

    • Valid Values:

      • Default Value: 0 (no waiting)

      • Range: 0-4294967295
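
    Since neither aria_group_commit nor aria_group_commit_interval is dynamic, group commit has to be configured in an option file before startup; for illustration (the interval of 1000 microseconds is only an example):

    [mariadb]
    ...
    aria_group_commit=hard
    aria_group_commit_interval=1000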

    aria_log_dir_path

    • Description: Path to the directory where the Aria transaction log should be stored.

    • Command line: --aria-log-dir-path=value

    • Scope: Global

    • Dynamic: No

    • Data Type: string

    • Default Value: Same as DATADIR

    • Introduced: , , (as a system variable, existed as an option only before that)

    aria_log_file_size

    • Description: Limit for Aria transaction log size

    • Command line: --aria-log-file-size=#

    • Scope: Global

    • Dynamic: Yes

    • Data Type: numeric

    • Default Value: 1073741824

    aria_log_purge_type

    • Description: Specifies how the Aria transaction logs are purged. Set to at_flush to keep a copy of the transaction logs (good as an extra backup). The logs will stay until the next FLUSH LOGS;

    • Command line: --aria-log-purge-type=name

    • Scope: Global

    • Dynamic: Yes

    • Data Type: enumeration

    • Default Value: immediate

    • Valid Values: immediate, external, at_flush

    aria_max_sort_file_size

    • Description: Don't use the fast sort index method to create an index if the temporary file would get bigger than this.

    • Command line: --aria-max-sort-file-size=#

    • Scope: Global

    • Dynamic: Yes

    • Data Type: numeric

    • Default Value: 9223372036853727232

    • Range: 0 to 9223372036854775807

    aria_page_checksum

    • Description: Determines whether index and data should use page checksums for extra safety. Can be overridden per table with PAGE_CHECKSUM clause in CREATE TABLE.

    • Command line: --aria-page-checksum=#

    • Scope: Global

    • Dynamic: Yes

    • Data Type: boolean

    • Default Value: ON

    aria_pagecache_age_threshold

    • Description: This characterizes the number of hits a hot block has to be untouched until it is considered aged enough to be downgraded to a warm block. This specifies the percentage ratio of that number of hits to the total number of blocks in the page cache.

    • Command line: --aria-pagecache-age-threshold=#

    • Scope: Global

    • Dynamic: Yes

    • Data Type: numeric

    • Default Value: 300

    • Range: 100 to 9999900

    aria_pagecache_buffer_size

    • Description: The size of the buffer used for index and data blocks for Aria tables. This can include explicit Aria tables, system tables, and temporary tables. Increase this to get better index and data handling, and measure the effect by comparing the Aria_pagecache_reads status variable (should be small) with Aria_pagecache_read_requests (see the example at the end of this list).

    • Command line: --aria-pagecache-buffer-size=#

    • Scope: Global

    • Dynamic: No

    • Data Type: numeric

    • Default Value: 134217728 (128MB)

    • Range: 131072 (128KB) upwards

    aria_pagecache_division_limit

    • Description: The minimum percentage of warm blocks in the key cache.

    • Command line: --aria-pagecache-division-limit=#

    • Scope: Global

    • Dynamic: Yes

    • Data Type: numeric

    • Default Value: 100

    • Range: 1 to 100

    aria_pagecache_file_hash_size

    • Description: Number of hash buckets for open and changed files. If you have many Aria files open you should increase this for faster flushing of changes. A good value is probably 1/10th of the number of possible open Aria files.

    • Command line: --aria-pagecache-file-hash-size=#

    • Scope: Global

    • Dynamic: No

    • Data Type: numeric

    • Default Value: 512

    • Range: 128 to 16384

    aria_pagecache_segments

    • Description: The number of segments in the page_cache. Each file is put in their own segments of size pagecache_buffer_size / segments. Having many segments improves parallel performance.

    • Command line: --aria-pagecache-segments=#

    • Scope: Global

    • Dynamic: No

    • Data Type: numeric

    • Default Value: 1

    • Range: 1 to 128

    • Introduced: , ,

    aria_recover

    • Description: aria_recover has been renamed to aria_recover_options. See aria_recover_options for the description.

    aria_recover_options

    • Description: Specifies how corrupted tables should be automatically repaired. More than one option can be specified, for example FORCE,BACKUP.

      • NORMAL: Normal automatic repair, the default until

      • OFF: Autorecovery is disabled, the equivalent of not using the option

      • QUICK: Does not check rows in the table if there are no delete blocks.

      • FORCE: Runs the recovery even if it determines that more than one row from the data file are lost.

      • BACKUP: Keeps a backup of the data files.

    • Command line: --aria-recover-options[=#]

    • Scope: Global

    • Dynamic: Yes

    • Data Type: enumeration

    • Default Value:

      • BACKUP,QUICK (>= )

      • NORMAL (<= )

    • Valid Values: NORMAL, BACKUP, FORCE, QUICK, OFF

    • Introduced:

    aria_repair_threads

    • Description: Number of threads to use when repairing Aria tables. The value of 1 disables parallel repair. Increasing from the default will usually result in faster repair, but will use more CPU and memory.

    • Command line: --aria-repair-threads=#

    • Scope: Global, Session

    • Dynamic: Yes

    • Data Type: numeric

    • Default Value: 1

    aria_sort_buffer_size

    • Description: The buffer that is allocated when sorting the index when doing a REPAIR or when creating indexes with CREATE INDEX or ALTER TABLE.

    • Command line: --aria-sort-buffer-size=#

    • Scope: Global, Session

    • Dynamic: Yes

    • Data Type: numeric

    • Default Value: 268434432

    aria_stats_method

    • Description: Determines how NULLs are treated for Aria index statistics purposes. If set to nulls_equal, all NULL index values are treated as a single group. This is usually fine, but if you have large numbers of NULLs the average group size is slanted higher, and the optimizer may miss using the index for ref accesses when it would be useful. If set to nulls_unequal, the default, the opposite approach is taken, with each NULL forming its own group of one. Conversely, the average group size is slanted lower, and the optimizer may use the index for ref accesses when not suitable. Setting to nulls_ignored ignores NULLs altogether from index group calculations. Statistics need to be recalculated after this method is changed. See also Index Statistics, myisam_stats_method and innodb_stats_method.

    • Command line: --aria-stats-method=#

    • Scope: Global, Session

    • Dynamic: Yes

    • Data Type: enumeration

    • Default Value: nulls_unequal

    • Valid Values: nulls_equal, nulls_unequal, nulls_ignored

    aria_sync_log_dir

    • Description: Controls syncing directory after log file growth and new file creation.

    • Command line: --aria-sync-log-dir=#

    • Scope: Global

    • Dynamic: Yes

    • Data Type: enumeration

    • Default Value: NEWFILE

    • Valid Values: NEWFILE, NEVER, ALWAYS

    aria_used_for_temp_tables

    • Description: Readonly variable indicating whether the Aria storage engine is used for temporary tables. If set to ON, the default, the Aria storage engine is used. If set to OFF, MariaDB reverts to using MyISAM for on-disk temporary tables. The MEMORY storage engine is used for temporary tables regardless of this variable's setting where appropriate. The default can be changed by not using the --with-aria-tmp-tables option when building MariaDB.

    • Command line: No

    • Scope: Global

    • Dynamic: No

    • Data Type: boolean

    • Default Value: ON

    deadlock_search_depth_long

    • Description: Long search depth for the two-step deadlock detection. Only used by the Aria storage engine.

    • Command line: --deadlock-search-depth-long=#

    • Scope: Global, Session

    • Dynamic: Yes

    • Data Type: numeric

    • Default Value: 15

    • Range: 0 to 33

    deadlock_search_depth_short

    • Description: Short search depth for the two-step deadlock detection. Only used by the Aria storage engine.

    • Command line: --deadlock-search-depth-short=#

    • Scope: Global, Session

    • Dynamic: Yes

    • Data Type: numeric

    • Default Value: 4

    • Range: 0 to 32

    deadlock_timeout_long

    • Description: Long timeout in microseconds for the two-step deadlock detection. Only used by the Aria storage engine.

    • Command line: --deadlock-timeout-long=#

    • Scope: Global, Session

    • Dynamic: Yes

    • Data Type: numeric

    • Default Value: 50000000

    • Range: 0 to 4294967295

    deadlock_timeout_short

    • Description: Short timeout in microseconds for the two-step deadlock detection. Only used by the Aria storage engine.

    • Command line: --deadlock-timeout-short=#

    • Scope: Global, Session

    • Dynamic: Yes

    • Data Type: numeric

    • Default Value: 10000

    • Range: 0 to 4294967295
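
    For example, to see how well the Aria page cache is doing, and to adjust one of the dynamic variables listed above (the values shown are only illustrative):

    SHOW GLOBAL STATUS LIKE 'Aria_pagecache_read%';
    SHOW GLOBAL VARIABLES LIKE 'aria_pagecache%';
    SET GLOBAL aria_checkpoint_interval = 60;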

    This page is licensed: CC BY-SA / Gnu FDL

    Aria FAQ

    The Aria storage engine has been in development since 2007 and was first announced on Monty's blog. The same core MySQL engineers who developed the MySQL server and the MyISAM, MERGE, and MEMORY storage engines are also working on Aria.

    Why is the engine called Aria?

    Originally, the storage engine was called Maria, after Monty's younger daughter. Monty named MySQL after his first child, My and his second child Max gave his name to MaxDB and the MySQL-Max distributions.

    In practice, having both MariaDB the database server and Maria the storage engine with such similar names proved confusing. To mitigate this, the decision was made to change the name. A Rename Maria contest was held during the first half of 2010 and names were submitted from around the world. Monty picked the name Aria from a short list of finalists. Chris Tooley, who suggested it, received the prize of a Linux-powered System 76 Meerkat NetTop from Monty Program.

    For more information, see the Aria Name.

    What's the goal for the current version?

    The current version of Aria is 1.5. The goal of this release is to develop a crash-safe alternative to MyISAM. That is, when MariaDB restarts after a crash, Aria recovers all tables to the state as of the start of a statement or at the start of the last LOCK TABLES statement.

    The current goal is to keep the code stable and fix all bugs.

    What's the goal for the next version?

    The next version of Aria is 2.0. The goal for this release is to develop a fully transactional storage engine with at least all the major features of InnoDB.

    Currently, Aria 2.0 is on hold as its developers are focusing on improving MariaDB. However, they are interested in working with interested customers and partners to add more features to Aria and eventually release 2.0.

    These are some of the goals for Aria 2.0:

    • ACID compliant

    • Commit/Rollback

    • Concurrent updates/deletes

    • Row locking

    • Group commit (already implemented)

    • Faster lookup in index pages (Page directory)

    Beginning in Aria 2.5, the plan is to focus on improving performance.

    What is the ultimate goal of Aria?

    Long term, we have the following goals for Aria:

    • To create a new, ACID and Multi-Version Concurrency Control (MVCC), transactional storage engine that can function as both the default non-transactional and transactional storage engine for MariaDB and MySQL®.

    • To be a MyISAM replacement. This is possible because Aria can also be run in non-transactional mode, supports the same row formats as MyISAM, and supports or will support all major features of MyISAM.

    • To be the default non-transactional engine in MariaDB (instead of MyISAM).

    What are the design goals in Aria?

    • Multi-Version Concurrency Control (MVCC) and ACID storage engine.

    • Optionally non-transactional tables that should be 'as fast and as compact' as MyISAM tables.

    • Be able to use Aria for internal temporary tables in MariaDB (instead of MyISAM).

    • All indexes should have equal speed (clustered index is not on our current road map for Aria. If you need clustered index, you should use XtraDB).

    • Allow 'any' length transactions to work (Having long running transactions will cause more log space to be used).

    • Allow log shipping; that is, you can do incremental backups of Aria tables just by copying the Aria logs.

    • Allow copying of Aria tables between different Aria servers (under some well-defined constraints).

    • Better blob handling (than is currently offered in MyISAM, at a minimum).

    • No memory copying or extra memory used for blobs on insert/update.

    • Blobs allocated in big sequential blocks - Less fragmentation over time.

    • Blobs are stored so that Aria can easily be extended to have access to any part of a blob with a single fetch in the future.

    • Efficient storage on disk (that is, low row data overhead, low page data overhead and little lost space on pages). Note: There is still some more work to succeed with this goal. The disk layout is fine, but we need more in-memory caches to ensure that we get a higher fill factor on the pages.

    • Small footprint, to make MariaDB + Aria suitable for desktop and embedded applications.

    • Flexible memory allocation and scalable algorithms to utilize large amounts of memory efficiently, when it is available.

    Where can I find documentation and help about Aria?

    Documentation is available at Aria and related topics. The project is maintained on GitHub.

    If you want to know what happens or be part of developing Aria, you can subscribe to the developers, docs, or discuss mailing lists.

    To report and check bugs in Aria, see jira.mariadb.org.

    You can usually find some of the Aria developers on our Zulip instance at mariadb.zulipchat.com or on the IRC channel #maria.

    Who develops Aria?

    The Core Team who develop Aria are:

    Technical lead

    • Michael "Monty" Widenius - Creator of MySQL and MyISAM

    Core Developers (in alphabetical order)

    • Guilhem Bichot - Replication expert, online backup for MyISAM, etc.

    • Kristian Nielsen - MySQL build tools, NDB, MySQL server

    • Oleksandr Byelkin - Query cache, sub-queries, views.

    • Sergei Golubchik - Server Architect, Full text search, keys for MyISAM-Merge, Plugin architecture, etc.

    All except Guilhem Bichot are working for MariaDB Corporation Ab.

    What is the release policy/schedule of Aria?

    Aria follows the same release policy and schedule as MariaDB. Some clarifications unique to the Aria storage engine:

    • Aria index and data file formats should be backwards and forwards compatible to ensure easy upgrades and downgrades.

    • The log file format should also be compatible, but we don't make any guarantees yet. In some cases when upgrading, you must remove the old aria_log.% and maria_log.% files before restarting MariaDB. (So far, this has only been necessary for one upgrade.)

    Extended commitment for Beta 1.5

    • Aria is now feature complete according to specification.

    How does Aria 1.5 Compare to MyISAM?

    Aria 1.0 was basically a crash-safe non-transactional version of MyISAM. Aria 1.5 added more concurrency (multiple inserters) and some optimizations.

    Aria supports all aspects of MyISAM, except as noted below. This includes external and internal check/repair/compressing of rows, different row formats, different index compress formats, aria_chk etc. After a normal shutdown you can copy Aria files between servers.

    Advantages of Aria compared to MyISAM

    • Data and indexes are crash safe.

    • On a crash, changes are rolled back to state of the start of a statement or a last LOCK TABLES statement.

    • Aria can replay almost everything from the log. (Including CREATE, DROP, RENAME, TRUNCATE tables). Therefore, you can make a backup of Aria tables by just copying the log. The things that can't be replayed (yet) are:

      • Batch INSERT into an empty table (This includes LOAD DATA INFILE, INSERT ... SELECT and INSERT (many rows)).

      • ALTER TABLE. Note that .frm tables are NOT recreated!

    • LOAD INDEX can skip index blocks for unwanted indexes.

    • Supports all MyISAM ROW formats and new PAGE format where data is stored in pages. (default size is 8K).

    • Multiple concurrent inserters into the same table.

    • When using PAGE format (default) row data is cached by page cache.

    • Aria has unit tests of most parts.

    • Supports both crash-safe (soon to be transactional) and non-transactional tables. (Non-transactional tables are not logged and their rows use less space): CREATE TABLE foo (...) TRANSACTIONAL=0|1 ENGINE=Aria.

    • PAGE is the only crash-safe/transactional row format.

    • PAGE format should give a notable speed improvement on systems which have bad data caching. (For example Windows).

    • The maximum key length is 2000 bytes, compared to 1000 bytes in MyISAM.

    Differences between Aria and MyISAM

    • Aria uses BIG (1G by default) log files.

    • Aria has a log control file (aria_log_control) and log files (aria_log.%). The log files can be automatically purged when not needed or purged on demand (after backup).

    • Aria uses 8K pages by default (MyISAM uses 1K). This makes Aria a bit faster when using keys of fixed size, but slower when using variable-length packed keys (until we add a directory to index pages).

    Disadvantages of Aria compared to MyISAM

    • Aria doesn't support INSERT DELAYED.

    • Aria does not support multiple key caches.

    • Storage of very small rows (< 25 bytes) is not efficient for the PAGE format.

    • MERGE tables don't support Aria (should be very easy to add later).

    • Aria data pages in block format have an overhead of 10 bytes/page and 5 bytes/row. Transaction and multiple concurrent-writer support will use an extra overhead of 7 bytes for new rows, 14 bytes for deleted rows and 0 bytes for old compacted rows.

    • No external locking (MyISAM has external locking, but this is a rarely used feature).

    • Aria has one page size for both index and data (defined when Aria is used the first time). MyISAM supports different page sizes per index.

    • Small overhead (15 bytes) per index page.

    • Aria doesn't support MySQL internal RAID (disabled in MyISAM too, it's a deprecated feature).

    • Minimum data file size for PAGE format is 16K (with 8K pages).

    • Aria doesn't support indexes on virtual fields.

    Differences between the MySQL-5.1-Maria release and the normal MySQL-5.1 release?

    See:

    • Aria storage engine

    Why do you use the TRANSACTIONAL keyword now when Aria is not yet transactional?

    In the current development phase Aria tables created with TRANSACTIONAL=1 are crash safe and atomic but not transactional because changes in Aria tables can't be rolled back with the ROLLBACK command. As we planned to make Aria tables fully transactional, we decided it was better to use the TRANSACTIONAL keyword from the start so that applications don't need to be changed later.

    What are the known problems with the MySQL-5.1-Maria release?

    • See KNOWN_BUGS.txt for open/design bugs.

    • See jira.mariadb.org for newly reported bugs. Please report anything you can't find here!

    • If there is a bug in the Aria recovery code or in the code that generates the logs, or if the logs become corrupted, then mysqld may fail to start because Aria can't execute the logs at start up.

    • The query cache and concurrent insert using the page row format have a bug; please disable the query cache while using the page row format.

    If Aria doesn't start or you have an unrecoverable table (shouldn't happen):

    • Remove the aria_log.% files from the data directory.

    • Restart mysqld and run CHECK TABLE, REPAIR TABLE or mariadb-check on your Aria tables.

    Alternatively,

    • Remove logs and run aria_chk on your *.MAI files.

    What is going to change in later Aria main releases?

    The LOCK TABLES statement will not start a crash-safe segment. You should use BEGIN and COMMIT instead.

    To make things future safe, you could do this:
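
    (A sketch with an illustrative table and column name; autocommit is disabled so that the table lock and the commit can be combined.)

    SET autocommit=0;
    LOCK TABLES my_aria_table WRITE;
    UPDATE my_aria_table SET counter = counter + 1;
    COMMIT;
    UNLOCK TABLES;
    SET autocommit=1;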

    And later you can just remove the LOCK TABLES and UNLOCK TABLES statements.

    How can I create a MyISAM-like (non-transactional) table in Aria?

    Example:
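
    -- a non-transactional, MyISAM-like Aria table (table and column names are illustrative)
    CREATE TABLE t1 (a INT, b VARCHAR(100))
      ENGINE=Aria TRANSACTIONAL=0 ROW_FORMAT=DYNAMIC;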

    Note that the rows are not cached in the page cache for FIXED or DYNAMIC format. If you want to have the data cached (something MyISAM doesn't support) you should use ROW_FORMAT=PAGE:
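
    -- the same table, but with data pages cached by the Aria page cache
    CREATE TABLE t2 (a INT, b VARCHAR(100))
      ENGINE=Aria TRANSACTIONAL=0 ROW_FORMAT=PAGE;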

    You can use PAGE_CHECKSUM=1 also for non-transactional tables; This puts a page checksums on all index pages. It also puts a checksum on data pages if you use ROW_FORMAT=PAGE.

    You may still have a speed difference (may be slightly positive or negative) between MyISAM and Aria because of different page sizes. You can change the page size for MariaDB with --aria-block-size=#, where # is 1024, 2048, 4096, 8192, 16384 or 32768.

    Note that if you change the page size you have to dump all your old Aria tables to text (with mariadb-dump), remove the old table files and the Aria log files (aria_log.%), and then restore the tables.

    What are the advantages/disadvantages of the new PAGE format compared to the old MyISAM-like row formats (DYNAMIC and FIXED)

    The MyISAM-like DYNAMIC and FIXED formats are extremely simple and have very little space overhead, so it's hard to beat them when it comes to simple scanning of unmodified data. The DYNAMIC format does however get notably worse over time if you update the row a lot in a manner that increases the size of the row.

    The advantages of the PAGE format (compared to DYNAMIC or FIXED) for non-transactional tables are:

    • It's cached by the Page Cache, which gives better random performance (as it uses less system calls).

    • Does not fragment as easily as the DYNAMIC format during UPDATE statements. The maximum number of fragments is very low.

    • Code can easily be extended to only read the accessed columns (for example to skip reading blobs).

    • Faster updates (compared to DYNAMIC).

    The disadvantages are:

    • Slight storage overhead (should only be notable for very small row sizes)

    • Slower full table scan time.

    • When using row_format=PAGE (the default), Aria first writes the row, then the keys, at which point the check for duplicate keys happens. This makes the PAGE format slower than DYNAMIC (or MyISAM) if there are a lot of duplicate keys, because of the overhead of writing and removing the row. If this is a problem, you can use row_format=DYNAMIC to get the same behavior as MyISAM.

    What's the proper way to copy an Aria table from one place to another?

    An Aria table consists of 3 files:
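
    • table_name.frm : the table definition (format) file
    • table_name.MAI : the Aria index file
    • table_name.MAD : the Aria data file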

    It's safe to copy all the Aria files to another directory or MariaDB instance if any of the following holds:

    • If you shut down the MariaDB Server properly with mariadb-admin shutdown, so that there is nothing for Aria to recover when it starts.

    or

    • If you have run a FLUSH TABLES statement and not accessed the table using SQL from that time until the tables have been copied.

    In addition, you must adhere to the following rule for transactional tables:

    You can't copy the table to a location within the same MariaDB server if the new table has existed before and the new table is still active in the Aria recovery log (that is, Aria may need to access the old data during recovery). If you are unsure whether the old name existed, run aria_chk --zerofill on the table before you use it.

    After copying a transactional table and before you use the table, we recommend that you run the command:
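
    aria_chk --zerofill table_name.MAI

    (where table_name.MAI is the index file of the copied table)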

    This will overwrite all references to the logs (LSN), all transactional references (TRN) and all unused space with 0. It also marks the table as 'movable'. An additional benefit of zerofill is that the Aria files will compress better. No real data is ever removed as part of zerofill.

    Aria will automatically notice if you have copied a table from another system and do 'zerofill' for the first access of the table if it was not marked as 'movable'. The reason for using aria_chk --zerofill is that you avoid a delay in the MariaDB server for the first access of the table.

    Note that this automatic detection doesn't work if you copy tables within the same MariaDB server!

    When is it safe to remove old log files?

    If you want to remove the Aria log files (aria_log.%) with rm or delete, then you must first shut down MariaDB cleanly (for example, with mariadb-admin shutdown) before deleting the old files.

    The same rules apply when upgrading MariaDB; When upgrading, first take down MariaDB in a clean way and then upgrade. This will allow you to remove the old log files if there are incompatible problems between releases.

    Don't remove the aria_log_control file! This is not a log file, but a file that contains information about the Aria setup (current transaction id, unique id, next log file number etc.).

    If you do, Aria will generate a new aria_log_control file at startup and will regard all old Aria files as files moved from another system. This means that they have to be 'zerofilled' before they can be used. This will happen automatically at next access of the Aria files, which can take some time if the files are big.

    If this happens, you will see messages about the zerofill operation in your mysqld.err file.

    As part of zerofilling no vital data is removed.

    How does one solve the Missing valid id error?

    See Aria Log Files for details.

    This page is licensed: CC BY-SA / Gnu FDL

    DIR Type

    A table of type DIR returns a list of file name and description as a result set. To create a DIR table, use a Create Table statement such as:
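
    -- a sketch of a DIR table over the C++ sources of the parent directory;
    -- the column names are free, only the FLAG values (see below) matter,
    -- and the flag used for the "modified" column is an assumption
    create table source (
      fname varchar(256) flag=2,
      size double(12,0) flag=5,
      modified datetime flag=6)
    engine=CONNECT table_type=DIR file_name='..\\*.cc';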

    When used in a query, the table returns the same file information listing that the system "DIR *.cc" command would return if executed in the same current directory (here supposedly ..).

    For instance, the query:
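
    -- restricted to the "handler" source files so only a few rows are shown
    select fname, size, modified from source where fname like '%handler%';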

    Displays:

    fname
    size
    modified

    handler

    152177

    2011-06-13 18:08:29

    sql_handler

    25321

    2011-06-13 18:08:31

    Note: the important item in this table is the flag option value (set sequentially from 0 by default) because it determines which particular information item is returned in the column:

    Flag value
    Information

    0

    The disk drive (Windows)

    1

    The file path

    2

    The file name

    3

    The file type

    4

    The file attribute

    5

    The file size

    The Subdir option

    When specified in the create table statement, the subdir option indicates to list, in addition to the files contained in the specified directory, all the files verifying the filename pattern that are contained in sub-directories of the specified directory. For instance, using:
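
    -- a sketch; the .frm pattern, the subdir spelling, and the table name
    -- are assumptions based on the result shown below
    create table frmdir (
      path varchar(256) flag=1,
      fname varchar(256) flag=2,
      size double(12,0) flag=5)
    engine=CONNECT table_type=DIR file_name='*.frm'
    option_list='subdir=1';

    select path, count(*), sum(size) from frmdir group by path;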

    You will get the following result set showing how many tables are created in the MariaDB databases and what is the total length of the FRM files:

    path
    count(*)
    sum(size)

    \CommonSource\mariadb-5.2.7\sql\data\connect\

    30

    264469

    \CommonSource\mariadb-5.2.7\sql\data\mysql\

    23

    207168

    \CommonSource\mariadb-5.2.7\sql\data\test\

    22

    196882

    The Nodir option (Windows)

    The Boolean Nodir option can be set to false (0 or no) to include, among the listed files, the directories that match the file name pattern (it is true by default). This is an addition to CONNECT version 1.6. Previously, directory names matching the pattern were listed on Windows. Directories were and are never listed on Linux.

    Note: The way file names are retrieved makes positional access to them impossible. Therefore, DIR tables cannot be indexed, nor sorted when the sorting is done using positions.

    Be aware, in particular when using the subdir option, that queries on DIR tables are slow and can last almost forever if made on a directory that contains a great number of files in it and its sub-directories.

    DIR tables can be used to populate a list of files used to create a multiple=2 table. However, this is not as useful as it was before multiple=3 existed.

    Windows Management Instrumentation Table Type “WMI”

    Note: This table type is available on Windows only.

    WMI provides an operating system interface through which instrumented components provide information. Some Microsoft tools to retrieve information through WMI are the WMIC console command and the WMI CIM Studio application.

    The CONNECT WMI table type enables administrators and operators not capable of scripting or programming on top of WMI to enjoy the benefits of WMI without even learning about it. It permits presenting this information as tables that can be queried, transformed, or copied into documents or other tables.

    To create a WMI table displaying information coming from a WMI provider, you must provide the namespace and the class name that characterize the information you want to retrieve. The best way to find them is to use the WMI CIM Studio, which has tools to browse namespaces and classes and can display the names of the properties of each class.

    The column names of the tables must be the names (case insensitive) of the properties you want to retrieve. For instance:
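
    -- a sketch assuming the usual CONNECT option_list spelling for the
    -- namespace and class; the columns are two properties of Msft_CliAlias
    create table alias (
      friendlyname char(32) not null,
      target       char(50) not null)
    engine=CONNECT table_type='WMI'
    option_list='Namespace=root\\cli,Class=Msft_CliAlias';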

    WMI tables return one row for each instance of the related information. The above example is handy to get the class equivalent of the aliases of the WMIC command and also to have a list of many commonly used classes.

    Because most of the useful classes belong to the 'root\cimv2' namespace, this is the default value for WMI tables when the namespace is not specified. Some classes have many properties whose name and type may not be known when creating the table. To find them, you can use the WMI CIM Studio application, but this is rarely required because CONNECT is able to retrieve them.

    Actually, the class specification also has default values for some namespaces. For the ‘root\cli’ namespace the class name defaults to ‘Msft_CliAlias’ and for the ‘root\cimv2’ namespace the class default value is ‘Win32_ComputerSystemProduct’. Because many class names begin with ‘Win32_’ it is not necessary to spell it out, and specifying the class as ‘Product’ will effectively use the class ‘Win32_Product’.

    For example if you define a table as:
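
    -- no columns and no class given: CONNECT uses the defaults described above
    create table csprod engine=CONNECT table_type='WMI';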

    It will return the information on the current machine, using the class ComputerSystemProduct of the CIMV2 namespace. For instance:
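
    select * from csprod;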

    Will return a result such as:

    Column
    Row 1

    Caption

    Computer system product

    Description

    Computer system product

    IdentifyingNumber

    LXAP50X32982327A922300

    Name

    Aspire 8920

    SKUNumber

    UUID

    00FC523D-B8F7-DC12-A70E-00B0D1A46136

    Note: This is a transposed display that can be obtained with some GUI.

    Getting column information

    An issue, when creating a WMI table, is to make its column definition. Indeed, even when you know the namespace and the class for the wanted information, it is not easy to find what are the names and types of its properties. However, because CONNECT can retrieve this information from the WMI provider, you can simply omit defining columns and CONNECT will do the job.

    Alternatively, you can get this information using a catalog table (see below).

    Performance Consideration

    Some WMI providers can be very slow to answer. This is not an issue for those that return few object instances, such as the ones returning computer, motherboard, or Bios information. They generally return only one row (instance). However, some can return many rows, in particular the "CIM_DataFile" class. This is why care must be taken about them.

    Firstly, it is possible to limit the allocated result size by using the ‘Estimate’ create table option. To avoid result truncation, CONNECT allocates a result of 100 rows, which is enough for almost all tables. The ‘Estimate’ option permits reducing this size for all classes that return only a few rows, and in some rare cases increasing it to avoid truncation.

    However, it is not possible to limit the time taken by some WMI providers to answer, in particular the CIM_DATAFILE class. Indeed the Microsoft documentation says about it:

    "Avoid enumerating or querying for all instances of CIM_DataFile on a computer because the volume of data is likely to either affect performance or cause the computer to stop responding."

    Sure enough, even a simple query such as:
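
    -- assuming a table such as the "cim" CIM_DataFile table created below
    select * from cim where path like '%\\MariaDB\\%';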

    is prone to last almost forever (probably due to the LIKE clause). This is why, when not asking for some specific items, you should consider using the DIR table type instead.

    Syntax of WMI queries

    Queries to WMI providers are done using the WQL language, not the SQL language. CONNECT does the job of making the WQL query. However, because of the restrictions of the WQL syntax, the WHERE clause is generated only when it respects the following restrictions:

    1. No function.

    2. No comparison between two columns.

    3. No expression (currently a CONNECT restriction)

    4. No BETWEEN and IN predicates.

    Filtering with WHERE clauses not respecting these conditions will still be done by MariaDB only, except in the case of CIM_Datafile class for the reason given above.

    However, there is one point that is not covered yet: the syntax used to specify dates in queries. WQL does not recognize dates as numeric items but expects dates specified as text in its internal format. Many formats are recognized, as described in the Microsoft documentation, but only one is useful because it is common to WQL and MariaDB SQL. Here is an example of a query on a table named "cim" created by:
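
    -- a sketch; the property names, the Estimate placement, and the values
    -- used in the query are assumptions
    create table cim (
      Drive        char(2)      not null,
      Path         varchar(255) not null,
      FileName     varchar(128) not null,
      LastModified datetime     not null)
    engine=CONNECT table_type='WMI'
    option_list='Class=CIM_DataFile,Estimate=5000';

    select filename, lastmodified from cim
      where drive = 'D:' and path = '\\MariaDB\\sql\\data\\'
        and lastmodified > '20120415000000.000000+120';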

    The date must be specified with the format in which CIM DATETIME values are stored (WMI uses the date and time formats defined by the Distributed Management Task Force).

    This syntax must be strictly respected. The text has the format:
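
    yyyymmddHHMMSS.mmmmmm+UUU  (or -UUU)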

    It is: year, month, day, hour, minute, second, millisecond, and signed minute deviation from UTC. This format is locale-independent so you can write a query that runs on any machine.

    Note 1: The WMI table type is available only in Windows versions of CONNECT.

    Note 2: WMI tables are read only.

    Note 3: WMI tables are not indexable.

    Note 4: WMI considers all strings to be case insensitive.

    MAC Address Table Type “MAC”

    Note: This table type is available on Windows only.

    This type is used to display various general information about the computer and, in particular, about its network cards. To create such a table, the syntax to use is:
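
    -- a sketch using the Flag values documented in the table below;
    -- the column names are free
    create table hostinfo (
      hostname varchar(132) flag=1,
      domain   varchar(132) flag=2,
      dnsaddr  varchar(24)  flag=3,
      nodetype int(1)       flag=4)
    engine=CONNECT table_type=MAC;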

    Column names can be freely chosen because their signification, i.e. the values they will display, comes from the specified Flag option. The valid values for Flag are:

    Flag
    Value
    Type

    1

    Host name

    varchar(132)

    2

    Domain

    varchar(132)

    3

    DNS address

    varchar(24)

    4

    Node type

    int(1)

    Note: The information of columns having a Flag value less than 10 are unique for the computer, the other ones are specific to the network cards of the computer.

    For instance, you can define the table macaddr as:

    If you execute the query:
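
    select host, address, ip, gateway, lease from macaddr;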

    It will return, for example:

    Host
    Address
    IP
    Gateway
    Lease

    OLIVIER

    00-A0-D1-A4-61-36

    0.0.0.0

    0.0.0.0

    1970-01-01 00:00:00

    OLIVIER

    00-1D-E0-9B-90-0B

    192.168.0.10

    192.168.0.254

    2011-09-18 10:28:5

    This page is licensed: GPLv2

    What is the FederatedX storage engine?

    The FederatedX Storage Engine is a storage engine that works with both MariaDB and MySQL. Where other storage engines are built as interfaces to lower-level file-based data stores, FederatedX uses libmysql to talk to the data source, the data source being a remote RDBMS. Currently, since FederatedX only uses libmysql, it can only talk to another MariaDB or MySQL RDBMS. The plan is of course to be able to use other RDBMS systems as a data source. There is an existing project, Federated ODBC, which was able to use PostgreSQL as a remote data source, and it is this type of functionality which will be brought to FederatedX in subsequent versions.

    History

    The history of FederatedX is derived from the history of Federated. Cisco needed a MySQL storage engine that would allow them to consolidate remote tables on some sort of routing device, being able to interact with these remote tables as if they were local to the device, but not actually on the device, since the routing device had only so much storage space. The first prototype of the Federated Storage Engine was developed by JD (need to check on this - Brian Aker can verify) using the HANDLER interface. Brian handed the code to Patrick Galbraith and explained how it needed to work, and with Brian and Monty's tutelage, Patrick had a working Federated Storage Engine with MySQL 5.0. Eventually, Federated was released to the public in a MySQL 5.0 release.

    When MySQL 5.1 became the production release of MySQL, Federated had more features and enhancements added to it, namely:

    • New Federated SERVER added to the parser. This was something Cisco needed that made it possible to change the connection parameters for numerous Federated tables at once without having to alter or re-create the Federated tables.

    • Basic Transactional support-- for supporting remote transactional tables

    • Various bugs that needed to be fixed from MySQL 5.0

    • Plugin capability

    In MariaDB 10.0.2, FederatedX got support for assisted table discovery.

    Installing the Plugin

    Although the plugin's shared library is distributed with MariaDB by default, the plugin is not actually installed by MariaDB by default. There are two methods that can be used to install the plugin with MariaDB.

    The first method can be used to install the plugin without restarting the server. You can install the plugin dynamically by executing INSTALL SONAME or INSTALL PLUGIN:
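
    INSTALL SONAME 'ha_federatedx';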

    The second method can be used to tell the server to load the plugin when it starts up. The plugin can be installed this way by providing the --plugin-load or the --plugin-load-add options. This can be specified as a command-line argument to mariadbd or it can be specified in a relevant server option group in an option file:
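
    [mariadb]
    plugin_load_add = ha_federatedx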

    Uninstalling the Plugin

    You can uninstall the plugin dynamically by executing UNINSTALL SONAME or UNINSTALL PLUGIN:
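
    UNINSTALL SONAME 'ha_federatedx';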

    If you installed the plugin by providing the --plugin-load or the --plugin-load-add options in a relevant server option group in an option file, then those options should be removed to prevent the plugin from being loaded the next time the server is restarted.

    How FederatedX works

    Every storage engine has to implement derived standard handler class API methods for a storage engine to work. FederatedX is no different in that regard. The big difference is that FederatedX needs to implement these handler methods in such a way as to construct SQL statements to run on the remote server and, if there is a result set, process that result set into the internal handler format so that the result is returned to the user.

    Internal workings of FederatedX

    Normal database files are local and as such: you create a table called 'users', and a file such as 'users.MYD' is created. A handler reads, inserts, deletes, and updates data in this file. The data is stored in a particular format, so to read, that data has to be parsed into fields, and to write, fields have to be stored in this format to write to this data file.

    With the FederatedX storage engine, there are no local files for each table's data (such as .MYD). A foreign database will store the data that would normally be in this file. This necessitates the use of the MySQL client API to read, delete, update, and insert this data. The data will have to be retrieved via an SQL call "SELECT * FROM users". Then, to read this data, it will have to be retrieved via mysql_fetch_row one row at a time, then converted from the columns of this select into the format that the handler expects.

    The basic functionality of how FederatedX works is:

    • The user issues an SQL statement against the local federatedX table. This statement is parsed into an item tree

    • FederatedX uses the mysql handler API to implement the various methods required for a storage engine. It has access to the item tree for the SQL statement issued, as well as the Table object and each of its Field members.

    • With this information, FederatedX constructs an SQL statement

    • The constructed SQL statement is sent to the Foreign data source through libmysql using the mysql client API

    • The foreign database reads the SQL statement and sends the result back through the mysql client API to the origin

    • If the original SQL statement has a result set from the foreign data source, the FederatedX storage engine iterates through the result set and converts each row and column to the internal handler format

    • If the original SQL statement only returns the number of rows returned (affected_rows), that number is added to the table stats which results in the user seeing how many rows were affected.

    FederatedX table creation

    The create table will simply create the .frm file, and within the CREATE TABLE SQL statement, there SHALL be any of the following:
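
    CONNECTION='mysql://username:password@hostname:port/database/tablename'
    CONNECTION='mysql://username@hostname/database/tablename'
    CONNECTION='mysql://username:password@hostname/database/tablename'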

    Or, using the syntax introduced in MySQL 5.1 for a Federated server (SQL/MED Spec xxxx):
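
    CONNECTION='server_name'
    CONNECTION='server_name/table_name'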

    An example of a connect string specifying all the connection parameters would be:
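
    -- the host, port, and credentials are illustrative
    CREATE TABLE federated.t1 (
      id   INT(20) NOT NULL,
      name VARCHAR(64) NOT NULL DEFAULT ''
    ) ENGINE=FEDERATED
      CONNECTION='mysql://root@127.0.0.1:9306/federated/t1';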

    Or, using a Federated server, first a server is created:
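
    -- the host and credentials are illustrative
    CREATE SERVER 'server_one' FOREIGN DATA WRAPPER 'mysql'
    OPTIONS (
      HOST '127.0.0.1',
      PORT 9306,
      USER 'root',
      PASSWORD '',
      DATABASE 'db1');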

    MariaDB starting with 10.11.12

    You can also use 'mariadb' as a wrapper.

    Then the FederatedX table is created specifying the newly created Federated server. The following statement creates a federated table, federated.t1, against the table db1.t1 on the remote server:
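
    -- the column definitions are illustrative; they must match db1.t1 on the remote server
    CREATE TABLE federated.t1 (
      id   INT(20) NOT NULL,
      name VARCHAR(64) NOT NULL DEFAULT ''
    ) ENGINE=FEDERATED
      CONNECTION='server_one/t1';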

    (Note that in MariaDB, the original Federated storage engine is replaced with the new FederatedX storage engine. And for backward compatibility, the old name "FEDERATED" is used in create table. So in MariaDB, the engine type should be given as "FEDERATED" without an extra "X", not "FEDERATEDX").

    The equivalent of the above, if done specifying all the connection parameters, would be:
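
    CONNECTION='mysql://root@127.0.0.1:9306/db1/t1'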

    You can also change the server to point to a new schema:
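
    ALTER SERVER 'server_one' OPTIONS (DATABASE 'db2');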

    All subsequent calls to any FederatedX table using the 'server_one' will now be against tables in db2! Guess what? You no longer have to perform an alter table in order to point one or more FederatedX tables to a new server!

    This connection="connection string" is necessary for the handler to be able to connect to the foreign server, either by URL, or by server name.

    Method calls

    One way to see how the FederatedX storage engine works is to compile a debug build of MariaDB and turn on a trace log. Using a two column table with one record, the SQL statements shown below can be analyzed to see which internal methods they cause to be called.

    SELECT

    If the query is for instance "SELECT * FROM foo", then the primary methods you would see with debug turned on would be first:

    Then for every row of data retrieved from the foreign database in the result set:

    After all the rows of data that were retrieved, you would see:

    INSERT

    If the query was "INSERT INTO foo (id, ts) VALUES (2, now());", the trace would be:

    UPDATE

    If the query was "UPDATE foo SET ts = now() WHERE id = 1;", the resultant trace would be:

    FederatedX capabilities and limitations

    • Tables MUST be created on the foreign server prior to any action on those tables via the handler, first version. IMPORTANT: IF you MUST use the FederatedX storage engine type on the REMOTE end, make sure that the table you connect to IS NOT a table pointing BACK to your ORIGINAL table! You know and have heard the screeching of audio feedback? You know putting two mirrors in front of each other how the reflection continues for eternity? Well, need I say more?!

    • There is no way for the handler to know if the foreign database or table has changed. The reason for this is that this database has to work like a data file that would never be written to by anything other than the database. The integrity of the data in the local table could be breached if there was any change to the foreign database.

    • Support for SELECT, INSERT, UPDATE, DELETE, and indexes.

    • No ALTER TABLE, DROP TABLE or any other Data Definition Language calls.

    • Prepared statements will not be used in the first implementation, it remains to be seen whether the limited subset of the client API for the server supports this.

    • This uses SELECT, INSERT, UPDATE, DELETE and not HANDLER for its implementation.

    • This will not work with the query cache.

    • FederatedX does not support types. Such tables cannot be created explicitly, nor discovered.

    How do you use FederatedX?

    To use this handler, it's very simple. You must have two databases running, either both on the same host, or on different hosts.

    First, on the foreign database you create a table, for example:
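
    -- an illustrative table; any structure works
    CREATE TABLE federated.test_table (
      id    INT(20) NOT NULL AUTO_INCREMENT,
      name  VARCHAR(32) NOT NULL DEFAULT '',
      other INT(20) NOT NULL DEFAULT 0,
      PRIMARY KEY (id),
      KEY name (name),
      KEY other_key (other)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1;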

    Then, on the server that is connecting to the foreign host (the client), you create a federated table without specifying the table structure:
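
    -- the host and port are illustrative; see the port discussion below
    CREATE TABLE federated.test_table ENGINE=FEDERATED
      CONNECTION='mysql://root@127.0.0.1:9306/federated/test_table';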

    Notice the "ENGINE" and "CONNECTION" fields? This is where you respectively set the engine type, "FEDERATED" and foreign host information, this being the database your 'client' database will connect to and use as the "data file". Obviously, the foreign database is running on port 9306, so you want to start up your other database so that it is indeed on port 9306, and your FederatedX database on a port other than that. In my setup, I use port 5554 for FederatedX, and port 5555 for the foreign database.

    Alternatively (or if you're using MariaDB before version 10.0.2) you specify the federated table structure explicitly:
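
    CREATE TABLE federated.test_table (
      id    INT(20) NOT NULL AUTO_INCREMENT,
      name  VARCHAR(32) NOT NULL DEFAULT '',
      other INT(20) NOT NULL DEFAULT 0,
      PRIMARY KEY (id),
      KEY name (name),
      KEY other_key (other)
    ) ENGINE=FEDERATED
      CONNECTION='mysql://root@127.0.0.1:9306/federated/test_table';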

    In this case the table structure must match exactly the table on the foreign server.

    How to see the storage engine in action

    When developing this handler, I compiled the FederatedX database with debugging:

    Once compiled, I did a 'make install' (not for the purpose of installing the binary, but to install all the files the binary expects to see in the directory I specified in the build).

    Then, I started the foreign server:

    Then, I went back to the directory containing the newly compiled mysqld (<builddir>/sql/) and started up gdb:

    Then, within the (gdb) prompt:

    Next, I open several windows for each:

    1. Tail the debug trace: tail -f /tmp/mysqld.trace|grep ha_fed

    2. Tail the SQL calls to the foreign database: tail -f /tmp/mysqld.5555.log

    3. A window with a client open to the federatedx server on port 5554

    4. A window with a client open to the federatedx server on port 5555

    I would create a table on the client to the foreign server on port 5555, and then to the FederatedX server on port 5554. At this point, I would run whatever queries I wanted to on the FederatedX server, just always remembering that whatever changes I wanted to make on the table, or if I created new tables, that I would have to do that on the foreign server.

    Another thing to look for is 'show variables' to show you that you have support for the FederatedX handler:
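
    -- the exact variable name shown depends on the build
    show variables like '%federated%';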

    and:
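
    show storage engines;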

    Both should display the federatedx storage handler.

    How do I create a federated server?

    A federated server is a way to have a foreign data source defined-- with all connection parameters-- so that you don't have to specify explicitly the connection parameters in a string.

    For instance, if you wanted to connect to a table, first_db.test_table, using this definition:
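
    -- the credentials are illustrative
    CREATE TABLE federated_test_table (
      id   INT(20) NOT NULL,
      name VARCHAR(32) NOT NULL DEFAULT ''
    ) ENGINE=FEDERATED
      CONNECTION='mysql://username:password@192.168.1.123:3306/first_db/test_table';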

    You could instead create this with a server:
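
    -- 'first_server' is an illustrative server name
    CREATE SERVER 'first_server' FOREIGN DATA WRAPPER 'mysql'
    OPTIONS (
      HOST '192.168.1.123',
      PORT 3306,
      USER 'username',
      PASSWORD 'password',
      DATABASE 'first_db');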

    You could now specify the server instead of the full URL in the connection string:
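
    CREATE TABLE federated_test_table (
      id   INT(20) NOT NULL,
      name VARCHAR(32) NOT NULL DEFAULT ''
    ) ENGINE=FEDERATED
      CONNECTION='first_server/test_table';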

    On the server where you create this federated_test_table you will now have access to the table test_table in the first_db database on the remote server found on 192.168.1.123.

    How does FederatedX differ from the old Federated Engine?

    FederatedX from a user point of view is the same for the most part. What is different with FederatedX and Federated is the following:

    • Rewrite of the main Federated source code from one single ha_federated.cc file into three main abstracted components:

      • ha_federatedx.cc - Core implementation of FederatedX

      • federated_io.cc - Parent connection class to be over-ridden by derived classes for each RDBMS/client lib

      • federated_io_.cc - derived federated_io class for a given RDBMS

      • federated_txn.cc - New support for using transactional engines on the foreign server using a connection poll

    • Various bugs fixed (need to look at opened bugs for Federated)

    Where can I get FederatedX

    FederatedX is included in MariaDB. MariaDB merges in the latest FederatedX when there is a need to get a bug fixed. You can get the latest code/follow/participate in the project from the FederatedX home page.

    What are the plans for FederatedX?

    • Support for other RDBMS vendors using ODBC

    • Support for pushdown conditions

    • Ability to limit result set sizes

    See Also

    • CREATE SERVER

    • ALTER SERVER

    • DROP SERVER

    This page is licensed: CC BY-SA / Gnu FDL


    CONNECT MYSQL Table Type: Accessing MySQL/MariaDB Tables

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    This table type uses the libmysql API to access a MySQL or MariaDB table or view. The accessed table must exist on the current server or on another local or remote server. This is similar to what the FederatedX storage engine provides, with some differences.

    Currently the Federated-like syntax can be used to create such a table, for instance:
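
    -- the column list and credentials are illustrative; "people" is the remote table
    create table essai (
      num  integer(4) not null,
      line char(15) not null)
    engine=CONNECT table_type=MYSQL
    connection='mysql://root@localhost/test/people';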

    The connection string can have the same syntax as that used by FEDERATED:
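
    scheme://username:password@hostname:port/database/tablename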

    However, it can also be mixed with connect standard options. For instance:

    It can also be specified as a reference to a federated server:

    The pure (deprecated) CONNECT syntax is also accepted:

    The specific connection items are:

    Option
    Default value
    Description
      • When the host is specified as “localhost”, the connection is established on Linux using Linux sockets. On Windows, the connection is established by default using shared memory if it is enabled. If not, the TCP protocol is used. An alternative is to specify the host as “.” to use a named pipe connection (if it is enabled). This makes it possible to use these table types with a server that is skipping networking.

    Caution: Take care not to refer to the MYSQL table itself to avoid an infinite loop!

    A MYSQL table can refer to the current server as well as to another server. Views can be referred to by name or directly by giving a source definition, for instance:
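
    -- a sketch using the SRCDEF option to give the source definition directly;
    -- the query text and credentials are illustrative
    create table grp engine=CONNECT table_type=MYSQL
    connection='mysql://root@localhost/test'
    srcdef='select title, count(*) as cnt from people group by title';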

    When specified, the columns of the MYSQL table must exist in the accessed table with the same name, but can be only a subset of them and specified in a different order. Their type must be a type supported by CONNECT and, if it is not identical to the type of the matching column in the accessed table, a conversion can be done according to the rules given in the data type conversion documentation.

    Note: For columns prone to be targeted by a where clause, keep the column type compatible with the source table column type (numeric or character) to have a correct rephrasing of the where clause.

    If you do not want to restrict or change the column definition, do not provide it and let CONNECT get the column definition from the remote server. For instance:
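
    create table essai engine=CONNECT table_type=MYSQL
    connection='mysql://root@localhost/test/people';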

    This will create the essai table with the same columns as the people table. If the target table contains CONNECT-incompatible type columns, see the data type conversion documentation to know how these columns can be converted or skipped.

    Charset Specification

    When accessing the remote table, CONNECT sets the connection character set to the default local table character set, as the FEDERATED engine does.

    Do not specify a column character set if it is different from the table default character set, even when it is the case on the remote table. This is because the remote column is translated to the local table character set when reading it. This is the default, but it can be modified by setting the corresponding variable of the target server. If it must keep its setting, for instance UTF8 when containing Unicode characters, set the local table default charset to that character set.

    This means that it is not possible to correctly retrieve a remote table if it contains columns having different character sets. A solution is to retrieve it by several local tables, each accessing only columns with the same character set.

    Indexing of MYSQL tables

    Indexes are rarely useful with MYSQL tables. This is because CONNECT tries to access only the requested rows. For instance, if you ask:
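
    select * from essai where num = 23;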

    CONNECT will construct and send to the server the query:
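
    SELECT num, line FROM people WHERE num = 23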

    If the people table is indexed on num, indexing is used on the remote server. This, in all cases, limits the amount of data to retrieve over the network.

    However, an index can be specified for columns that are prone to be used to join another table to the MYSQL table. For instance:
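
    -- "orders" and its columns are illustrative; cnc_tab is the CONNECT MYSQL table
    select o.order_no, c.name
      from orders o
      join cnc_tab c on c.id = o.customer_id;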

    If the id column of the remote table addressed by the cnc_tab MYSQL table is indexed (which is likely if it is a key) you should also index the id column of the MYSQL cnc_tab table. If so, using “remote” indexing as does FEDERATED, only the useful rows of the remote table are retrieved during the join process. However, because these rows are retrieved by separate statements, this is useful only when retrieving a few rows of a big table.

    In particular, you should not specify an index for columns not used for joining and above all DO NOT index a joined column if it is not indexed in the remote table. This would cause multiple scans of the remote table to retrieve the joined rows one by one.

    Data Modifying Operations

    The CONNECT MYSQL type supports SELECT and INSERT, and a somewhat limited form of UPDATE and DELETE. These are described below.

    The MYSQL type uses similar methods to the ODBC type to implement the INSERT, UPDATE and DELETE commands. Refer to the ODBC chapter for the restrictions concerning them.

    For the UPDATE and DELETE commands, there are fewer restrictions because, the remote server being a MySQL server, the syntax of the command will always be acceptable to the remote server.

    For instance, you can freely use keywords like IGNORE or LOW_PRIORITY as well as scalar functions in the SET and WHERE clauses.

    However, there is still an issue on multi-table statements. Let us suppose you have a t1 table on the remote server and want to execute a query such as:
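
    -- t1 is the remote-only table; the column names are illustrative
    update essai as x
      set line = (select msg from t1 where id = x.num)
      where num = 2;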

    When parsed locally, you will have errors if no t1 table exists or if it does not have the referenced columns. When t1 does not exist, you can overcome this issue by creating a local dummy t1 table:
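
    -- a dummy local table with just the referenced columns; the engine choice is arbitrary
    create table t1 (id int, msg varchar(64)) engine=BLACKHOLE;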

    This will make the local parser happy and permit executing the command on the remote server. Note however that having a local MYSQL table defined on the remote t1 table does not solve the problem unless it is also named t1 locally.

    This is why, to permit all types of commands to be executed by the data source without any restriction, CONNECT provides a specific MySQL table subtype, described below.

    Sending commands to a MariaDB Server

    This can be done, as for ODBC or JDBC tables, by defining a specific table that is used to send commands and get the result of their execution.

    The key points in this create statement are the EXECSRC option and the column definition.

    The EXECSRC option tells CONNECT that this table is used to send commands to the MariaDB server. Most of the sent commands do not return a result set. Therefore, the table columns are used to specify the command to be executed and to get the result of the execution. The name of these columns can be chosen arbitrarily, their function coming from the FLAG value:

    How to use this table and specify the command to send? By executing a command such as:

    This will send the command specified in the WHERE clause to the data source and return the result of its execution. The syntax of the WHERE clause must be exactly as shown above. For instance:

    This command returns:

    command
    warnings
    number
    message

    Sending several commands in one call

    It can be faster to execute several commands in one call because there is only one connection for all of them. To send several commands in one call, use the following syntax:

    When several commands are sent, the execution stops at the end of them or after a command that is in error. To continue after n errors, set the option maxerr=n (0 by default) in the option list.

    Note 1: It is possible to specify the SRCDEF option when creating an EXECSRC table. It will be the command sent by default when a WHERE clause is not specified.

    Note 2: Backslashes inside commands must be escaped. Simple quotes must be escaped if the command is specified between simple quotes, and double quotes if it is specified between double quotes.

    Note 3: Sent commands apply in the specified database. However, they can address any table within this database.

    Note 4: Currently, all commands are executed in mode AUTOCOMMIT.

    Retrieving Warnings and Notes

If a sent command causes warnings to be issued, it is useless to resend a "show warnings" command because the connection to the MariaDB server is opened and closed when sending commands. Therefore, getting warnings requires a specific (and tricky) way.

    To indicate that warning text must be added to the returned result, you must send a multi-command query containing “pseudo” commands that are not sent to the server but directly interpreted by the EXECSRC table. These “pseudo” commands are:

    Note that they must be spelled (case insensitive) exactly as above, no final “s”. For instance:

    This can return something like this:

command | warnings | number | message

    The execution continued after the command in error because of the MAXERR option. Normally this would have stopped the execution.

Of course, the last "select" command is useless here because it cannot return the table content. Another MYSQL table without the EXECSRC option and with a proper column definition should be used instead.

    Connection Engine Limitations

    Data types

There is a maximum key/index length of 255 bytes. You may be able to declare the table without an index and rely on engine condition pushdown and the remote schema.

    The following types can't be used:

• BIT, BINARY, TINYBLOB, BLOB, MEDIUMBLOB, LONGBLOB

• TINYTEXT, MEDIUMTEXT, LONGTEXT

Note: TEXT is allowed. However, the handling depends on the values given to the connect_type_conv and connect_conv_size system variables, and by default no conversion of TEXT columns is permitted.

    SQL Limitations

The following SQL queries are not supported: REPLACE INTO and INSERT ... ON DUPLICATE KEY UPDATE.

    CONNECT MYSQL versus FEDERATED

The CONNECT MYSQL table type should not be regarded as a replacement for the FEDERATED(X) engine. The main use of the MYSQL type is to access other engine tables as if they were CONNECT tables. This was necessary when accessing tables from some CONNECT table types such as TBL, XCOL, OCCUR, or PIVOT that are designed to access CONNECT tables only. When their target table is not a CONNECT table, these types silently use an intermediate MYSQL table internally.

    However, there are cases where you can use MYSQL CONNECT tables yourself, for instance:

1. When the table is used by a TBL table. This enables you to specify the connection parameters for each sub-table and is more efficient than using a local FEDERATED sub-table.

    2. When the desired returned data is directly specified by the SRCDEF option. This is great to let the remote server do most of the job, such as grouping and/or joining tables. This cannot be done with the FEDERATED engine.

    3. To take advantage of the push_cond facility that adds a where clause to the command sent to the remote table. This restricts the size of the result set and can be crucial for big tables.

    4. For tables with the EXECSRC option on.

    If you need multi-table updating, deleting, or bulk inserting on a remote table, you can alternatively use the FEDERATED engine or a “send” table specifying the EXECSRC option on.

See also: Using the TBL and MYSQL types together

    This page is licensed: GPLv2

    Using CONNECT - Partitioning and Sharding

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

CONNECT supports the MySQL/MariaDB partition specification. It is done in a way similar to what MyISAM or InnoDB do, by using the PARTITION engine, which must be enabled for this to work. This type of partitioning is sometimes referred to as "horizontal partitioning".

    Partitioning enables you to distribute portions of individual tables across a file system according to rules which you can set largely as needed. In effect, different portions of a table are stored as separate tables in different locations. The user-selected rule by which the division of data is accomplished is known as a partitioning function, which in MariaDB can be the modulus, simple matching against a set of ranges or value lists, an internal hashing function, or a linear hashing function.

    CONNECT takes this notion a step further, by providing two types of partitioning:

    ... option_list='opname1=opvalue1,opname2=opvalue2...'
    option_list='name=TABLE,coltype=HTML,attribute=border=1;cellpadding=5,headattr=bgcolor=yellow';
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50)
    );
    
    ALTER TABLE tab ADD COLUMN c VARCHAR(50), ALGORITHM=INPLACE;
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ADD COLUMN c VARCHAR(50);
    -- Create a temporary table with the new definition
    CREATE TEMPORARY TABLE tmp_tab (
    ...
    );
    
    -- Copy the data from the original table
    INSERT INTO tmp_tab
       SELECT * FROM original_tab;
    
    -- Drop the original table
    DROP TABLE original_tab;
    
    -- Rename the temporary table, so that it replaces the original one
    RENAME TABLE tmp_tab TO original_tab;
    SET SESSION alter_algorithm='INPLACE';
    
    ALTER TABLE tab MODIFY COLUMN c INT;
    ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
    SET SESSION alter_algorithm='NOCOPY';
    
    ALTER TABLE tab MODIFY COLUMN c INT;
    ERROR 1846 (0A000): ALGORITHM=NOCOPY is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
    SET SESSION alter_algorithm='INSTANT';
    
    ALTER TABLE tab MODIFY COLUMN c INT;
    ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50)
    );
    
    ALTER TABLE tab ADD COLUMN c VARCHAR(50), ALGORITHM=INPLACE, LOCK=NONE;
    BEGIN;
    LOCK TABLES ....
    UNLOCK TABLES;
    COMMIT;
    CREATE TABLE t1 (a INT) ROW_FORMAT=FIXED TRANSACTIONAL=0 PAGE_CHECKSUM=0;
    CREATE TABLE t2 (a INT) ROW_FORMAT=DYNAMIC TRANSACTIONAL=0 PAGE_CHECKSUM=0;
    SHOW CREATE TABLE t1;
    SHOW CREATE TABLE t2;
    CREATE TABLE t3 (a INT) ROW_FORMAT=PAGE TRANSACTIONAL=0 PAGE_CHECKSUM=0;
    SHOW CREATE TABLE t3;
    # rm datadir/aria_log*
    XXX.frm : The definition for the table, used by MySQL.
XXX.MAI : Aria internal information about the structure of the data and index, and data for all indexes.
    XXX.MAD : The data.
    $ aria_chk --zerofill table_name
    [Note] Zerofilling moved table: '.\database\xxxx'
    CREATE TABLE SOURCE (
      DRIVE CHAR(2) NOT NULL,
      PATH VARCHAR(256) NOT NULL,
      FNAME VARCHAR(256) NOT NULL,
      FTYPE CHAR(4) NOT NULL,
      SIZE DOUBLE(12,0) NOT NULL flag=5,
      MODIFIED DATETIME NOT NULL)
    ENGINE=CONNECT table_type=DIR file_name='..\\*.cc';
    SELECT fname, SIZE, modified FROM SOURCE
      WHERE fname like '%handler%';
    CREATE TABLE DATA (
      PATH VARCHAR(256) NOT NULL flag=1,
      FNAME VARCHAR(256) NOT NULL,
      FTYPE CHAR(4) NOT NULL,
      SIZE DOUBLE(12,0) NOT NULL flag=5)
    ENGINE=CONNECT table_type=DIR file_name='*.frm'
    option_list='subdir=1';
    
    SELECT PATH, COUNT(*), SUM(SIZE) FROM DATA GROUP BY PATH;
    CREATE TABLE ALIAS (
      friendlyname CHAR(32) NOT NULL,
      target CHAR(50) NOT NULL)
    ENGINE=CONNECT table_type='WMI'
    option_list='Namespace=root\\cli,Class=Msft_CliAlias';
    CREATE TABLE CSPROD ENGINE=CONNECT table_type='WMI';
    SELECT * FROM csprod;
    SELECT COUNT(*) FROM cim WHERE drive = 'D:' AND PATH like '\\MariaDB\\%';
    CREATE TABLE cim (
      Name VARCHAR(255) NOT NULL,
      LastModified DATETIME NOT NULL)
    ENGINE=CONNECT table_type='WMI'
    option_list='class=CIM_DataFile,estimate=5000';
    SELECT * FROM cim WHERE drive = 'D:' AND PATH = '\\PlugDB\\Bin\\'
         and lastmodified > '20120415000000.000000+120';
    yyyymmddHHMMSS.mmmmmmsUUU
    CREATE TABLE tabname (COLUMN definition)
    ENGINE=CONNECT table_type=MAC;
    CREATE TABLE MACADDR (
      Host VARCHAR(132) flag=1,
      Card VARCHAR(132) flag=11,
      Address CHAR(24) flag=12,
      IP CHAR(16) flag=15,
      Gateway CHAR(16) flag=17,
      Lease DATETIME flag=23)
    ENGINE=CONNECT table_type=MAC;
    SELECT host, address, ip, gateway, lease FROM MACADDR;
    INSTALL SONAME 'ha_federatedx';
    [mariadb]
    ...
    plugin_load_add = ha_federatedx
    UNINSTALL SONAME 'ha_federatedx';
    connection=scheme://username:password@hostname:port/database/tablename
    connection=scheme://username@hostname/database/tablename
    connection=scheme://username:password@hostname/database/tablename
    connection=scheme://username:password@hostname/database/tablename
    connection="connection_one"
    connection="connection_one/table_foo"
    connection=mysql://username:password@hostname:port/database/tablename
    CREATE SERVER 'server_one' FOREIGN DATA WRAPPER 'mysql' OPTIONS
      (HOST '127.0.0.1',
      DATABASE 'db1',
      USER 'root',
      PASSWORD '',
      PORT 3306,
      SOCKET '',
      OWNER 'root');
    CREATE TABLE federated.t1 (
      `id` INT(20) NOT NULL,
      `name` VARCHAR(64) NOT NULL DEFAULT ''
      )
    ENGINE="FEDERATED" DEFAULT CHARSET=latin1
    CONNECTION='server_one';
    CONNECTION="mysql://root@127.0.0.1:3306/db1/t1"
    ALTER SERVER 'server_one' OPTIONS(DATABASE 'db2');
    ha_federatedx::info
    ha_federatedx::scan_time:
    ha_federatedx::rnd_init: share->select_query SELECT * FROM foo
    ha_federatedx::extra
    ha_federatedx::rnd_next
    ha_federatedx::convert_row_to_internal_format
    ha_federatedx::rnd_next
    ha_federatedx::rnd_end
    ha_federatedx::extra
    ha_federatedx::reset
    ha_federatedx::write_row
    ha_federatedx::reset
    ha_federatedx::index_init
    ha_federatedx::index_read
    ha_federatedx::index_read_idx
    ha_federatedx::rnd_next
    ha_federatedx::convert_row_to_internal_format
    ha_federatedx::update_row
    
    ha_federatedx::extra
    ha_federatedx::extra
    ha_federatedx::extra
    ha_federatedx::external_lock
    ha_federatedx::reset
    CREATE TABLE federated.test_table (
      id     INT(20) NOT NULL AUTO_INCREMENT,
      name   VARCHAR(32) NOT NULL DEFAULT '',
      other  INT(20) NOT NULL DEFAULT '0',
      PRIMARY KEY  (id),
      KEY name (name),
      KEY other_key (other))
    DEFAULT CHARSET=latin1;
    CREATE TABLE federated_test_table ENGINE=FEDERATED 
      CONNECTION='mysql://root@127.0.0.1:9306/federated/test_table';
    CREATE TABLE federated_test_table (
      id     INT(20) NOT NULL AUTO_INCREMENT,
      name   VARCHAR(32) NOT NULL DEFAULT '',
      other  INT(20) NOT NULL DEFAULT '0',
      PRIMARY KEY  (id),
      KEY name (name),
      KEY other_key (other))
    ENGINE=FEDERATED
    DEFAULT CHARSET=latin1
    CONNECTION='mysql://root@127.0.0.1:9306/federated/test_table';
    ./configure --with-federatedx-storage-engine \
      --prefix=/home/mysql/mysql-build/federatedx/ --with-debug
    --prefix=/home/code-dev/maria
    /usr/local/mysql/bin/mysqld_safe \
      --user=mysql --log=/tmp/mysqld.5555.log -P 5555
    gdb ./mysqld
    (gdb) run --gdb --port=5554 --socket=/tmp/mysqld.5554 --skip-innodb --debug
    SHOW VARIABLES LIKE '%federat%';
    SHOW storage engines;
    CREATE TABLE federated_test_table ENGINE=FEDERATED 
      CONNECTION='mysql://patg@192.168.1.123/first_db/test_table';
    CREATE SERVER 'server_one' FOREIGN DATA WRAPPER 'mysql' OPTIONS
      (HOST '192.168.1.123',    
      DATABASE 'first_db',    
      USER 'patg',
      PASSWORD '',
      PORT 3306,
      SOCKET '',    
      OWNER 'root');
    CREATE TABLE federated_test_table ENGINE=FEDERATED 
      CONNECTION='server_one/test_table';
    CREATE TABLE essai (
      num INTEGER(4) NOT NULL,
      line CHAR(15) NOT NULL)
    ENGINE=CONNECT table_type=MYSQL
    CONNECTION='mysql://root@localhost/test/people';
    scheme://username:password@hostname:port/database/tablename
    scheme://username@hostname/database/tablename
    scheme://username:password@hostname/database/tablename
    scheme://username:password@hostname/database/tablename
    CREATE TABLE essai (
      num INTEGER(4) NOT NULL,
      line CHAR(15) NOT NULL)
    ENGINE=CONNECT table_type=MYSQL dbname=test tabname=people
    CONNECTION='mysql://root@localhost';
    connection="connection_one"
    connection="connection_one/table_foo"
    innodb_safe_truncate
    datadir
    OCCUR
    MONGO
    ODBC
    JDBC
    MYSQL
    ODBC
    JDBC
    MYSQL
    catalog
    PROXY
    Inward table
    CSV
    VEC
    Connect 1.06.0010
    MYSQL
    OEM
    CSV
    ODBC
    JDBC
    CSV
    CSV
    XCOL
    JSON tables
    VEC
    ODBC
    JDBC
    PIVOT
    OEM
    type
    DOS
    FIX
    BIN
    CSV
    FMT
    XML
    JSON
    INI
    DBF
    VEC
    ODBC
    JDBC
    MYSQL
    TBL
    PROXY
    XCOL
    OCCUR
    PIVOT
    ZIP
    VIR
    DIR
    WMI
    MAC
    OEM
    DOS
    MYSQL
    PROXY
    ODBC
    JDBC
    MYSQL
    PROXY
    catalog tables
    XML
    Connect 1.06.0010

Password: An optional user password (default: no password).

Port: The port of the server (default: the currently used port).

Quoted: 1 if the remote Tabname must be quoted (default: 0).

insert into try(msg) values('One'),(NULL),('Three') | 1 | 3 | Affected rows
Warning | 0 | 1048 | Column 'msg' cannot be null
insert into try values(2,'Deux') on duplicate key... | 0 | 2 | Affected rows
insert into try(msge) values('Four'),('Five'),('Six') | 0 | 1054 | Unknown column 'msge' in 'field list'
insert into try(id) values(NULL) | 1 | 1 | Affected rows
Warning | 0 | 1364 | Field 'msg' doesn't have a default value
update try set msg = 'Four' where id = 4 | 0 | 1 | Affected rows
select * from try | 0 | 2 | Result set columns

• ENUM

• SET

• Geometry types

  • When doing tests. For instance to check a connection string.

Table: The name of the table to access (default: the table name).

Database: The database where the table is located (default: the current DB name).

Host: The host of the server, a name or an IP address (default: localhost*).

User: The connection user name (default: the current user).

Flag=0: The command to execute (the default).

Flag=1: The number of affected rows, or the result number of columns if the command would return a result set.

Flag=2: The returned (eventually error) message.

Flag=3: The number of warnings.

CREATE TABLE people (num integer(4) primary key aut... | 0 | 0 | Affected rows

Warning: To get warnings

Note: To get notes

Error: To get errors returned as warnings (?)

drop table if exists try | 1 | 0 | Affected rows
Note | 0 | 1051 | Unknown table 'try'
create table try (id int key auto_increment, msg... | 0 | 0 | Affected rows


    CREATE TABLE essai (
      num INTEGER(4) NOT NULL,
      line CHAR(15) NOT NULL)
    ENGINE=CONNECT table_type=MYSQL dbname=test tabname=people
    option_list='user=root,host=localhost';
    CREATE TABLE grp ENGINE=CONNECT table_type=mysql
    CONNECTION='mysql://root@localhost/test/people'
    SRCDEF='select title, count(*) as cnt from employees group by title';
    CREATE TABLE essai ENGINE=CONNECT table_type=MYSQL
    CONNECTION='mysql://root@localhost/test/people';
    SELECT * FROM essai WHERE num = 23;
    SELECT num, line FROM people WHERE num = 23
    SELECT d.id, d.name, f.dept, f.salary
    FROM loc_tab d STRAIGHT_JOIN cnc_tab f ON d.id = f.id
    WHERE f.salary > 10000;
    UPDATE essai AS x SET line = (SELECT msg FROM t1 WHERE id = x.num)
    WHERE num = 2;
    CREATE TABLE t1 (id INT, msg CHAR(1)) ENGINE=BLACKHOLE;
      number int(5) not null flag=1,
    CREATE TABLE send (
      command VARCHAR(128) NOT NULL,
      warnings INT(4) NOT NULL flag=3,
      message VARCHAR(255) flag=2)
    ENGINE=CONNECT table_type=mysql
    CONNECTION='mysql://user@host/database'
    option_list='Execsrc=1,Maxerr=2';
    SELECT * FROM send WHERE command = 'a command';
    SELECT * FROM send WHERE command =
    'CREATE TABLE people (
    num integer(4) primary key autoincrement,
line char(15) not null)';
    SELECT * FROM send WHERE command IN (
    "update people set line = 'Two' where id = 2",
    "update people set line = 'Three' where id = 3");
    SELECT * FROM send WHERE command IN ('Warning','Note',
    'drop table if exists try',
    'create table try (id int key auto_increment, msg varchar(32) not
    null) engine=aria',
    "insert into try(msg) values('One'),(NULL),('Three') ",
    "insert into try values(2,'Deux') on duplicate key update msg =
    'Two'",
    "insert into try(message) values('Four'),('Five'),('Six')",
    'insert into try(id) values(NULL)',
    "update try set msg = 'Four' where id = 4",
    'select * from try');
    soft - The service thread will wait the specified time and then sync() to the log. If the interval is 0 then it won't wait for any commits (this is dangerous and should generally not be used in production)

For a full list of server options, system variables and status variables, see the full list of MariaDB options, system and status variables.

• File partitioning. Each partition is stored in a separate file like in multiple tables.

• Table partitioning. Each partition is stored in a separate table like in TBL tables.

Partition engine issues

Using partitions sometimes requires creating the tables in an unnatural way to avoid some errors due to several partition engine bugs:

    1. Engine specific column and index options are not recognized and cause a syntax error when the table is created. The workaround is to create the table in two steps, a CREATE TABLE statement followed by an ALTER TABLE statement.

    2. The connection string, when specified for the table, is lost by the partition engine. The workaround is to specify the connection string in the option_list.

3. MySQL upstream bug #71095. In the case of list columns partitioning, it sometimes causes a false "impossible where" clause to be raised. This makes a wrong empty result be returned when it should not be empty. There is no workaround, but this bug will hopefully be fixed.

The following examples use the above workaround syntax to address these issues.
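As a minimal sketch of the two-step workaround mentioned in issue 1 above (table, column and file names are illustrative):

-- Step 1: create the CONNECT table with its engine-specific options
CREATE TABLE parttab (
  id INT NOT NULL,
  msg VARCHAR(32),
  INDEX XID(id))
ENGINE=CONNECT table_type=FIX file_name='E:/Data/Test/%s.txt';

-- Step 2: add the partition definition with a separate ALTER TABLE
ALTER TABLE parttab
PARTITION BY RANGE COLUMNS(id) (
PARTITION `1` VALUES LESS THAN(10),
PARTITION `2` VALUES LESS THAN(50),
PARTITION `3` VALUES LESS THAN(MAXVALUE));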

    File Partitioning

    File partitioning applies to file-based CONNECT table types. As with multiple tables, physical data is stored in several files instead of just one. The differences to multiple tables are:

    1. Data is distributed amongst the different files following the partition rule.

    2. Unlike multiple tables, partitioned tables are not read only.

3. Unlike multiple tables, partitioned tables can be indexed.

    4. The file names are generated from the partition names.

    5. Query pruning is automatically made by the partition engine.

    The table file names are generated differently depending on whether the table is an inward or outward table. For inward tables, for which the file name is not specified, the partition file names are:

    For instance for the table:

    CONNECT will generate in the current data directory the files:

This is similar to what the partition engine does for other engines - CONNECT partitioned inward tables behave like the partition tables of other engines do. Just the data format is different.

    Note: If sub-partitioning is used, inward table files and index files are named:

    Outward Tables

The real problems occur with outward tables, in particular when they are created from already existing files. The first issue is to make the partition table use the correct existing file names. The second one, only for already existing non-empty tables, is to be sure that the partitioning function matches the distribution of the data already existing in the files.

    The first issue is addressed by the way data file names are constructed. For instance let us suppose we want to make a table from the fixed formatted files:

    This can be done by creating a table such as:

The rule is that for each partition the matching file name is internally generated by replacing the "%s" part of the given FILE_NAME option value by the partition name.

If the table was initially void, further inserts will populate it according to the partition function. However, if the files did exist and contained data, it is your responsibility to determine what partition function actually matches the data distribution in them. This means in particular that partitioning by key or by hash cannot be used (except in exceptional cases) because you have almost no control over what the used algorithm does.

In the example above, there is no problem if the table is initially void, but if it is not, serious problems can occur if the initial distribution does not match the table distribution. Supposing a row in which "id" has the value 12 was initially contained in the part1.txt file, it is seen when selecting the whole table, but if you ask:

    The result will have 0 rows. This is because according to the partition function query pruning will only look inside the second partition and will miss the row that is in the wrong partition.

One way to check for a wrong distribution is, for instance, to compare the results of queries such as:

    And

    If they match, the distribution can be correct although this does not prove it. However, if they do not match, the distribution is surely wrong.
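For the t2 example above, the two comparison queries (as shown in the code listings) can be written as follows; the first asks the partition engine, the second recomputes the expected partition from the data itself:

SELECT partition_name, table_rows
FROM information_schema.partitions WHERE table_name = 't2';

SELECT CASE WHEN id < 10 THEN 1 WHEN id < 50 THEN 2 ELSE 3 END AS pn,
COUNT(*) FROM t2 GROUP BY pn;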

    Partitioning on a Special Column

    There are some cases where the files of a multiple table do not contain columns that can be used for range or list partitioning. For instance, let’s suppose we have a multiple table based on the following files:

    Each of them containing the same kind of data:

    A multiple table can be created on them, for instance by:

    The issue is that if we want to create a partitioned table on these files, there are no columns to use for defining a partition function. Each city file can have the same kind of column values and there is no way to distinguish them.

However, there is a solution. It is to add to the table a special column that is used by the partition function. For instance, the new table creation can be done by:

    Note 1: we had to do it in two steps because of the column CONNECT options.

    Note 2: the special column PARTID returns the name of the partition in which the row is located.

    Note 3: here we could have used the FNAME special column instead because the file name is specified as being the partition name.

This may seem rather stupid because it means for instance that a row is in partition boston if it belongs to the partition boston! However, it works because the partition engine doesn't know about special columns and behaves as if the city column was a real column.

What happens if we populate it with some inserts?

The value given for the city column (explicitly or by default) is used by the partition engine to decide in which partition to insert the rows. It is ignored by CONNECT (a special column cannot be given a value) but will later return the matching value. For instance:

    This query returns:

    city
    first_name
    job

    boston

    Johnny

    RESEARCH

    chicago

    Jim

    SALES

    Everything works as if the city column was a real column contained in the table data files.

    Partitioning of zipped tables

Two cases are currently supported. If a table is based on several zipped files, partitioning is done the standard way as above: it is the file_name option specifying the name of the zip files that shall contain the "%s" part used to generate the file names. If a table is based on only one zip file containing several entries, this is indicated by placing the "%s" part in the entry option value. Note: If a table is based on several zipped files each containing several entries, only the first case is possible. Using sub-partitioning to make partitions on each entry is not supported yet.

    Table Partitioning

    With table partitioning, each partition is physically represented by a sub-table. Compared to standard partitioning, this brings the following features:

    1. The partitions can be tables driven by different engines. This relieves the current existing limitation of the partition engine.

    2. The partitions can be tables driven by engines not currently supporting partitioning.

    3. Partition tables can be located on remote servers, enabling table sharding.

    4. Like for TBL tables, the columns of the partition table do not necessarily match the columns of the sub-tables.

The way it is done is to create the partition table with a table type referring to other tables: PROXY, MYSQL, ODBC or JDBC. Let us see how this is done on a simple example. Supposing we have created the following tables:

    We can for instance create a partition table using these tables as physical partitions by:

Here the name of each partition sub-table is made by replacing the "%s" part of the tabname option value by the partition name. Now if we do:

    The rows are distributed in the different sub-tables according to the partition function. This can be seen by executing the query:

    This query replies:

partition_name | table_rows
1 | 4
2 | 4
3 | 3

    Query pruning is of course automatic, for instance:

    This query replies:

id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | Extra
1 | SIMPLE | part5 | 3

When executing this select query, only sub-table xt3 is used.

    Indexing with Table Partitioning

Using the PROXY table type seems natural. However, in the current version, the issue is that PROXY (and ODBC) tables are not indexable. This is why, if you want the table to be indexed, you must use the MYSQL table type. The CREATE TABLE statement is almost the same:

The column id is declared as a key, and the table type is now MYSQL. This makes the sub-tables be accessed by calling the MariaDB server, as MYSQL tables do. Note that this modifies only the way CONNECT sub-tables are accessed.

However, indexing just makes the partitioned table use "remote indexing" the way FEDERATED tables do. This means that when sending the query to retrieve the table data, a where clause is added to the query. For instance, let's suppose you ask:

The query sent to the server is:

    On a query like this one, it does not change much because the where clause could have been added anyway by the cond_push function, but it does make a difference in case of joins. The main thing to understand is that real indexing is done by the called table and therefore that it should be indexed.

    This also means that the xt1, xt2, and xt3 table indexes should be made separately because creating the t2 table as indexed does not make the indexes on the sub-tables.
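For instance, the sub-table indexes can be created separately (a minimal sketch for the xt1 and xt2 sub-tables defined in the listings):

ALTER TABLE xt1 ADD INDEX XID(id);
ALTER TABLE xt2 ADD INDEX XID(id);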

    Sharding with Table Partitioning

Using table partitioning can have one more advantage. Because the sub-tables can address a table located on another server, it is possible to shard a table on separate servers and hardware machines. This may be required to access, as one table, data already located on several remote machines, such as the servers of a company's branches. Or it can just be used to split a huge table for performance reasons. For instance, supposing we have created the following tables:

Creating the partition table accessing all these is almost like what we did with the t4 table:

The only difference is the tabname option now referring to the rt1, rt2, and rt3 tables. However, even if it works, this is not the best way to do it. This is because accessing a table via the MySQL API is done twice per table: once by CONNECT to access the FEDERATED table on the local server, then a second time by the FEDERATED engine to access the remote table.

The CONNECT MYSQL table type being used anyway, you'd rather use it to directly access the remote tables. Indeed, the partition names can also be used to modify the connection URLs. For instance, in the case shown above, the partition table can be created as:

    Several things can be noted here:

    1. As we have seen before, the partition engine currently loses the connection string. This is why it was specified as “connect” in the option list.

2. For each partition sub-table, the "%s" part of the connection string has been replaced by the partition name.

3. It is no longer needed to define the rt1, rt2, and rt3 tables (even if it does no harm) and the FEDERATED engine is no longer used to access the remote tables.

    This is a simple case where the connection string is almost the same for all the sub-tables. But what if the sub-tables are accessed by very different connection strings? For instance:

There are two solutions. The first one is to use the differing parts of the connection strings as partition names:

    The second one, allowing avoiding too complicated partition names, is to create federated servers to access the remote tables (if they do not already exist, else just use them). For instance the first one could be:

    Similarly, “server_two” and “server_three” would be created and the final partition table would be created as:

    It would be even simpler if all remote tables had the same name on the remote databases, for instance if they all were named xt1, the connection string could be set as “server_%s/xt1” and the partition names would be just “one”, “two”, and “three”.

    Sharding on a Special Column

The technique we have seen above with file partitioning is also available with table partitioning. Companies wanting to use, as one table, data sharded on the company branch servers can, as we have seen, add a special column to the table create definition. For instance:

    This example assumes that federated servers had been created named “server_main”, “server_east” and “server_west” and that all remote tables are named “sales”. Note also that in this example, the column id is no more a key.

    Current Partition Limitations

Because the partition engine was written before some other engines were added to MariaDB, the way it works is sometimes incompatible with these engines, in particular with CONNECT.

    Update statement

    With the sample tables above, you can do update statements such as:

    It works perfectly and is accepted by CONNECT. However, let us consider the statement:

This statement is not accepted by CONNECT. The reason is that the column id being part of the partition function, changing its value may require the modified row to be moved to another partition. The way it is done by the partition engine is to delete the old row and to re-insert the new modified one. However, this is done in a way that is not currently compatible with CONNECT (remember that CONNECT supports UPDATE in a specific way, in particular for the table type MYSQL). This limitation could be temporary. Meanwhile, the workaround is to do manually what the partition engine does: delete the row to modify and insert the modified row:
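Using the statements from the code listings, the manual workaround is:

DELETE FROM t2 WHERE id = 4;
INSERT INTO t2 VALUES(41, 'four');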

    Alter Table statement

For all CONNECT outward tables, the ALTER TABLE statement does not make any change to the table data. This is why ALTER TABLE should not be used, in particular to modify the partition definition, except of course to correct a wrong definition. Note that using ALTER TABLE to create a partition table in two steps (because column options would otherwise be lost) is valid, as it applies to a table that is not yet partitioned.

    As we have seen, it is also safe to use it to create or drop indexes. Otherwise, a simple rule of thumb is to avoid altering a table definition and better drop and re-create a table whose definition must be modified. Just remember that for outward CONNECT tables, dropping a table does not erase the data and that creating it does not modify existing data.

    Rowid special column

    Each partition being handled separately as one table, the ROWID special column returns the rank of the row in its partition, not in the whole table. This means that for partition tables ROWID and ROWNUM are equivalent.

    This page is licensed: CC BY-SA / Gnu FDL


6: The last write access date

7: The last read access date

8: The file creation date

Vendor: Acer

Version: Aspire 8920

Flag | Column | Type
5 | Scope ID | varchar(256)
6 | Routing | int(1)
7 | Proxy | int(1)
8 | DNS | int(1)
10 | Name | varchar(260)
11 | Description | varchar(132)
12 | MAC address | char(24)
13 | Type | int(3)
14 | DHCP | int(1)
15 | IP address | char(16)
16 | SUBNET mask | char(16)
17 | GATEWAY | char(16)
18 | DHCP server | char(16)
19 | Have WINS | int(1)
20 | Primary WINS | char(16)
21 | Secondary WINS | char(16)
22 | Lease obtained | datetime
23 | Lease expires | datetime

    MDEV-6817
    GEOMETRY

    CONNECT JDBC Table Type: Accessing Tables from Another DBMS

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

The JDBC table type should be distributed with all recent versions of MariaDB. However, while automatic compilation of it is possible once the Java JDK is installed, the complete distribution of it is not fully implemented in older versions. The distributed JdbcInterface.jar file contains the JdbcInterface wrapper only. New versions distribute a JavaWrappers.jar that contains all currently existing wrappers.

    This will require that:

    1. The Java SDK is installed on your system.

    2. The java wrapper class files are available on your system.

    3. And of course, some JDBC drivers exist to be used with the matching DBMS.

    Point 2 was made automatic in the newest versions of MariaDB.

    Compiling From Source Distribution

Even when the Java JDK has been installed, CMake sometimes cannot find its location. For instance, on Linux the Oracle Java JDK package might be installed in a path not known to the CMake lookup functions, causing error messages such as:

When this happens, provide a Java prefix as a hint on where the package was loaded. For instance, on Ubuntu I was obliged to enter:

    After that, the compilation of the CONNECT JDBC type was completed successfully.

    Compiling the Java source files

    They are the source of the java wrapper classes used to access JDBC drivers. In the source distribution, they are located in the CONNECT source directory.

    The default wrapper, JdbcInterface, is the only one distributed with binary distribution. It uses the standard way to get a connection to the drivers via the DriverManager.getConnection method. Other wrappers, only available with source distribution, enable connection to a Data Source, eventually implementing pooling. However, they must be compiled and installed manually.

    The available wrappers are:

    Wrapper
    Description

The wrapper used by default is specified by the connect_java_wrapper session variable and is initially set to wrappers/JdbcInterface. The wrapper to use for a table can also be specified in the option list as a wrapper option of the "create table" statements.

    Note: Conforming java naming usage, class names are preceded by the java package name with a slash separator. However, this is not mandatory for CONNECT which adds the package name if it is missing.

    The JdbcInterface wrapper is always usable when Java is present on your machine. Binary distributions have this wrapper already compiled as a JdbcInterface.jar file installed in the plugin directory whose path is automatically included in the class path of the JVM. Recent versions also add a JavaWrappers.jar that contains all these wrappers. Therefore there is no need to worry about its path.

    Compiling the ApacheInterface wrapper requires that the Apache common-DBCP2 package be installed. Other wrappers are to be used only with the matching JDBC drivers that must be available when compiling them.

    Installing the jar file in the plugin directory is the best place because it is part of the class path. Depending on what is installed on your system, the source files can be reduced accordingly. To compile only the JdbcInterface.java file the CMAKE_JAVA_INCLUDE_PATH is not required. Here the paths are the ones existing on my Windows 7 machine and should be localized.

    Setting the Required Information

    Before any operation with a JDBC driver can be made, CONNECT must initialize the environment that will make working with Java possible. This will consist of:

    1. Loading dynamically the JVM library module.

    2. Creating the Java Virtual Machine.

    3. Establishing contact with the java wrapper class.

    4. Connecting to the used JDBC driver.

    Indeed, the JVM library module is not statically linked to the CONNECT plugin. This is to make it possible to use a CONNECT plugin that has been compiled with the JDBC table type on a machine where the Java SDK is not installed. Otherwise, users not interested in the JDBC table type would be obliged to install the Java SDK on their machine to be able to load the CONNECT storage engine.

    JVM Library Location

If the JVM library (jvm.dll on Windows, libjvm.so on Linux) was not placed in the standard library load path, CONNECT cannot find it and must be told where to search for it. This happens in particular on Linux when the Oracle Java package was installed in a private location.

    If the JAVA_HOME variable was exported as explained above, CONNECT can sometimes find it using this information. Otherwise, its search path can be added to the LD_LIBRARY_PATH environment variable. But all this is complicated because making environment variables permanent on Linux is painful (many different methods must be used depending on the Linux version and the used shell).

    This is why CONNECT introduced a new global variable connect_jvm_path to store this information. It can be set when starting the server as a command line option or even afterwards before the first use of the JDBC table type:

    or

    The client library is smaller and faster for connection. The server library is more optimized and can be used in case of heavy load usage.
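For instance (a sketch; the actual path depends on where the JDK is installed and on whether the client or server JVM library is preferred):

SET GLOBAL connect_jvm_path='/usr/lib/jvm/java-8-oracle/jre/lib/amd64/client';
-- or
SET GLOBAL connect_jvm_path='/usr/lib/jvm/java-8-oracle/jre/lib/amd64/server';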

    Note that this may not be required on Windows because the path to the JVM library can sometimes be found in the registry.

    Once this library is loaded, CONNECT can create the required Java Virtual Machine.

    Java Class Path

    This is the list of paths Java searches when loading classes. With CONNECT, the classes to load are the java wrapper classes used to communicate with the drivers , and the used JDBC driver classes that are grouped inside jar files. If the ApacheInterface wrapper must be used, the class path must also include all three jars used by the Apache package.

Caution: This class path is passed as a parameter to the Java Virtual Machine (JVM) when creating it and cannot be modified afterwards, as it is a read-only property. In addition, because MariaDB is a multi-threading application, this JVM cannot be destroyed and is used throughout the entire life of the MariaDB server. Therefore, be sure it is correctly set before you use the JDBC table type for the first time. Otherwise, there is practically no alternative other than to shut down the server and restart it.

    The path to the wrapper classes must point to the directory containing the wrappers sub-directory. If a JdbcInterface.jar file was made, its path is the directory where it is located followed by the jar file name. It is unclear where because this will depend on the installation process. If you start from a source distribution, it can be in the storage/connect directory where the CONNECT source files are or where you moved them or compiled the JdbcInterface.jar file.

    For binary distributions, there is nothing to do because the jar file has been installed in the mysql share directory whose path is always automatically included in the class path available to the JVM.

    Remaining are the paths of all the installed JDBC drivers that you intend to use. Remember that their path must include the jar file itself. Some applications use an environment variable CLASSPATH to contain them. Paths are separated by ‘:’ on Linux and by ‘;’ on Windows.

If the CLASSPATH variable actually exists and if it is available inside MariaDB, so far so good. You can check this using a UDF function provided by CONNECT that returns environment variable values:

Most of the time, this will return null, or some required files will be missing. This is why CONNECT introduced a global variable to store this information. The paths specified in this variable are added to, and have precedence over, the ones (if any) of the CLASSPATH environment variable. As with the jvm path, this variable connect_class_path should be specified when starting the server but can also be set before using the JDBC table type for the first time.
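A sketch of setting it (the jar names and locations are illustrative and must be adapted to the actual installation; paths are separated by ':' on Linux and ';' on Windows):

SET GLOBAL connect_class_path='/usr/share/java/postgresql-jdbc.jar:/usr/share/java/mariadb-java-client.jar';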

    The current directory (sql/data) is also placed by CONNECT at the beginning of the class path.

    As an example, here is how I start MariaDB when doing tests on Linux:

    CONNECT JDBC Tables

These tables are given the type JDBC. For instance, supposing you want to access the boys table located on an external local or remote database management system providing a JDBC connector:

    To access this table via JDBC you can create a table such as:
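(A sketch; the URL is the one of the MariaDB/MySQL JDBC connector and the database name is illustrative. The column definitions are left out here and retrieved by discovery, and user and password may also be given in the option list, as noted below.)

CREATE TABLE jboys
ENGINE=CONNECT table_type=JDBC tabname='boys'
CONNECTION='jdbc:mariadb://localhost:3306/connect'
option_list='User=root';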

    The CONNECTION option is the URL used to establish the connection with the remote server. Its syntax depends on the external DBMS and in this example is the one used to connect as root to a MySQL or MariaDB local database using the MySQL JDBC connector.

As for ODBC, the column definitions can be omitted and will be retrieved by the discovery process. The restrictions concerning column definitions are the same as for ODBC.

    Note: The dbname indicated in the URL corresponds for many DBMS to the catalog information. For MySQL and MariaDB it is the schema (often called database) of the connection.

    Using a Federated Server

    Alternatively, a JDBC table can specify its connection options via a Federated server. For instance, supposing you have a table accessing an external Postgresql table defined as:

    You can create a Federated server:

    Now the JDBC table can be created by:

    or by:

    In any case, the location of the remote table can be changed in the Federated server without having to alter all the tables using this server.

    JDBC needs a URL to establish a connection. CONNECT was able to construct that URL from the information contained in such Federated server definition when the URL syntax is similar to the one of MySQL, MariaDB or Postgresql. However, other DBMSs such as Oracle use a different URL syntax. In this case, simply replace the HOST information by the required URL in the Federated server definition. For instance:

    Now you can create an Oracle table with something like this:

    Note: Oracle, as Postgresql, does not seem to understand the DATABASE setting as the table schema that must be specified in the Create Table statement.

    Connecting to a JDBC driver

    When the connection to the driver is established by the JdbcInterface wrapper class, it uses the options that are provided when creating the CONNECT JDBC tables. Inside the default Java wrapper, the driver’s main class is loaded by the DriverManager.getConnection function that takes three arguments:

    URL
    User
    Password

    The URL varies depending on the connected DBMS. Refer to the documentation of the specific JDBC driver for a description of the syntax to use. User and password can also be specified in the option list.

    Beware that the database name in the URL can be interpreted differently depending on the DBMS. For MySQL this is the schema in which the tables are found. However, for Postgresql, this is the catalog and the schema must be specified using the CONNECT dbname option.

    For instance a table accessing a Postgresql table via JDBC can be created with a create statement such as:
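(A sketch; the URL, schema and credentials are illustrative. The dbname option is used here to give the PostgreSQL schema.)

CREATE TABLE jt1
ENGINE=CONNECT table_type=JDBC tabname='t1' dbname='public'
CONNECTION='jdbc:postgresql://localhost/mtr'
option_list='User=mtr,Password=mtr';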

    Note: In previous versions of JDBC, to obtain a connection, java first had to initialize the JDBC driver by calling the method Class.forName. In this case, see the documentation of your DBMS driver to obtain the name of the class that implements the interface java.sql.Driver. This name can be specified as an option DRIVER to be put in the option list. However, most modern JDBC drivers since version 4 are self-loading and do not require this option to be specified.

    The wrapper class also creates some required items and, in particular, a statement class. Some characteristics of this statement will depend on the options specified when creating the table:

    Scrollable
    Block_size

    Fetch Size

    The fetch size determines the number of rows that are internally retrieved by the driver on each interaction with the DBMS. Its default value depends on the JDBC driver. It is equal to 10 for some drivers but not for the MySQL or MariaDB connectors.

    The MySQL/MariaDB connectors retrieve all the rows returned by one query and keep them in a memory cache. This is generally fine in most cases, but not when retrieving a large result set that can make the query fail with a memory exhausted exception.

    To avoid this, when accessing a big table and expecting large result sets, you should specify the BLOCK_SIZE option to 1 (the only acceptable value). However a problem remains:

    Suppose you execute a query such as:

    Not knowing the limit clause, CONNECT sends to the remote DBMS the query:

    In this query big can be a huge table having million rows. Having correctly specified the block size as 1 when creating the table, the wrapper just reads the 10 first rows and stops. However, when closing the statement, these MySQL/MariaDB drivers must still retrieve all the rows returned by the query. This is why, the wrapper class when closing the statement also cancels the query to stop that extra reading.

The bad news is that even if it works all right with some previous versions of the MySQL driver, it does not work with newer versions, nor with the MariaDB driver, which apparently ignores the cancel command. The good news is that you can use an old MySQL driver to access MariaDB databases. It is also possible that this bug will be fixed in future versions of the drivers.
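For instance, a table expected to return very large result sets might be declared like this (a sketch; names and connection string are illustrative):

CREATE TABLE jbig
ENGINE=CONNECT table_type=JDBC tabname='big' block_size=1
CONNECTION='jdbc:mariadb://remotehost:3306/test'
option_list='User=root';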

    Connection to a Data Source

    This is the java preferred way to establish a connection because a data source can keep a pool of connections that can be re-used when necessary. This makes establishing connections much faster once it was done for the first time.

CONNECT provides additional wrappers whose files are located in the CONNECT source directory. The wrapper to use can be specified in the global variable connect_java_wrapper, which defaults to "JdbcInterface".

    It can also be specified for a table in the option list by setting the option wrapper to its name. For instance:
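(A sketch; this assumes the ApacheInterface wrapper and the Apache jars are available on the class path.)

CREATE TABLE jboys2
ENGINE=CONNECT table_type=JDBC tabname='boys'
CONNECTION='jdbc:mariadb://localhost:3306/connect'
option_list='User=root,Wrapper=ApacheInterface';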

They can be used instead of the standard JdbcInterface wrapper and use created data sources.

    The Apache one uses data sources implemented by the Apache-commons-dbcp2 package and can be used with all drivers including those not implementing data sources. However, the Apache package must be installed and its three required jar files accessible via the class path.

    1. commons-dbcp2-2.1.1.jar

    2. commons-pool2-2.4.2.jar

    3. commons-logging-1.2.jar

    Note: the versions numbers can be different on your installation.

    The other ones use data sources provided by the matching JDBC driver. There are currently four wrappers to be used with mysql-6.0.2, mariadb, oracle and postgresql.

    Unlike the class path, the used wrapper can be changed even after the JVM machine was created.

    Random Access to JDBC Tables

    The same methods described for ODBC tables can be used with JDBC tables.

    Note that in the case of the MySQL or MariaDB connectors, because they internally read the whole result set in memory, using the MEMORY option would be a waste of memory. It is much better to specify the use of a scrollable cursor when needed.

    Other Operations with JDBC Tables

    Except for the way the connection string is specified and the table type set to JDBC, all operations with ODBC tables are done for JDBC tables the same way. Refer to the ODBC chapter to know about:

    • Accessing specified views (SRCDEF)

    • Data modifying operations.

    • Sending commands to a data source.

    • JDBC catalog information.

Note: Some JDBC drivers fail when the global time_zone variable is ambiguous, which sometimes happens when it is set to SYSTEM. If so, reset it to an unambiguous value, for instance:
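(A sketch; choose the offset or named time zone that matches your server.)

SET GLOBAL time_zone = '+02:00';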

    JDBC Specific Restrictions

    Connecting via data sources created externally (for instance using Tomcat) is not supported yet.

    Other restrictions are the same as for the ODBC table type.

    Handling the UUID Data Type

    PostgreSQL has a native UUID data type, internally stored as BIN(16). This is neither an SQL nor a MariaDB data type. The best we can do is to handle it by its character representation.

UUID is translated to CHAR(36) when column definitions are set using discovery. Locally, a PostgreSQL UUID column is handled like a CHAR or VARCHAR column. Example:

    Using the PostgreSQL table testuuid in the text database:

    Its column definitions can be queried by:

    This query returns:

Table | Column | Type | Name | Size

Note: PostgreSQL, when a column size is undefined, returns 2147483647, which is not acceptable for MariaDB. CONNECT changes it to the value of the connect_conv_size session variable. Also, for TEXT columns the data type returned is 12 (SQL_VARCHAR) instead of -1, the SQL_TEXT value.

    Accessing this table via JDBC by:

it is created by discovery as:

    Note: 8192 being here the connect_conv_size value.

    Let's populate it:

    Result:

    id
    msg

    Here the id column values come from the DEFAULT of the PostgreSQL column that was specified as uuid_generate_v4().

    It can be set from MariaDB. For instance:

    Result:

    id
    msg

    The first insert specifies a valid UUID character representation. The second one set it to NULL. The third one (a void string) generates a Java random UUID. UPDATE commands obey the same specification.

    These commands both work:

    However, this one fails:

    Returning:

    1296: Got error 174 'ExecuteQuery: org.postgresql.util.PSQLException: ERROR: operator does not exist: uuid ~ unknown hint: no operator corresponds to the data name and to the argument types.

    because CONNECT cond_push feature added the WHERE clause to the query sent to PostgreSQL:

    and the LIKE operator does not apply to UUID in PostgreSQL.

To handle this, a new session variable was added to CONNECT: connect_cond_push. It permits specifying whether cond_push is enabled for CONNECT, and defaults to 1 (enabled). In this case, you can execute:
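(A sketch; this disables condition pushdown for the current session.)

SET SESSION connect_cond_push=0;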

Doing so, the where clause is executed by MariaDB only and the query will not fail anymore.

    Executing the JDBC tests

Four tests exist, but they are disabled because they require some work to localize them according to the operating system, the available Java package, and the JDBC drivers and DBMS.

Two of them, jdbc.test and jdbc_new.test, access MariaDB via JDBC drivers that are contained in a fat jar file that is part of the test. They should be executable without anything to do on Windows; simply add the option --enable-disabled when running the tests.

    However, on Linux these tests can fail to locate the JVM library. Before executing them, you should export the JAVA_HOME environment variable set to the prefix of the java installation or export the LD_LIBRARY_PATH containing the path to the JVM lib.

    Fixing Problem With mariadb-dump

In some cases or on some platforms, when CONNECT is set up for use with JDBC table types, this causes mariadb-dump with the option --all-databases to fail.

    This was reported by Robert Dyas who found the cause - see the discussion at .

    This page is licensed: CC BY-SA / Gnu FDL

    Data file name: table_name#P#partition_name.table_file_type
    Index file name: table_name#P#partition_name.index_file_type
    CREATE TABLE t1 (
    id INT KEY NOT NULL,
    msg VARCHAR(32))
    ENGINE=CONNECT TABLE_TYPE=FIX
    PARTITION BY RANGE(id) (
    PARTITION first VALUES LESS THAN(10),
    PARTITION middle VALUES LESS THAN(50),
    PARTITION last VALUES LESS THAN(MAXVALUE));
    | t1#P#first.fix
    | t1#P#first.fnx
    | t1#P#middle.fix
    | t1#P#middle.fnx
    | t1#P#last.fix
    | t1#P#last.fnx
    | table_name#P#partition_name#SP#subpartition_name.type
    | table_name#P#partition_name#SP#subpartition_name.index_type
    E:\Data\part1.txt
    E:\Data\part2.txt
    E:\Data\part3.txt
    CREATE TABLE t2 (
    id INT NOT NULL,
    msg VARCHAR(32),
    INDEX XID(id))
    ENGINE=connect table_type=FIX file_name='E:/Data/part%s.txt'
    PARTITION BY RANGE(id) (
    PARTITION `1` VALUES LESS THAN(10),
    PARTITION `2` VALUES LESS THAN(50),
    PARTITION `3` VALUES LESS THAN(MAXVALUE));
    SELECT * FROM t2 WHERE id = 12;
    SELECT partition_name, table_rows FROM
    information_schema.partitions WHERE table_name = 't2';
    SELECT CASE WHEN id < 10 THEN 1 WHEN id < 50 THEN 2 ELSE 3 END
    AS pn, COUNT(*) FROM part3 GROUP BY pn;
    tmp/boston.txt
    tmp/chicago.txt
    tmp/atlanta.txt
    ID: int
    First_name: varchar(16)
    Last_name: varchar(30)
    Birth: date
    Hired: date
    Job: char(10)
    Salary: double(8,2)
    CREATE TABLE mulemp (
    id INT NOT NULL,
    first_name VARCHAR(16) NOT NULL,
    last_name VARCHAR(30) NOT NULL,
    birth DATE NOT NULL date_format='DD/MM/YYYY',
    hired DATE NOT NULL date_format='DD/MM/YYYY',
    job CHAR(10) NOT NULL,
    salary DOUBLE(8,2) NOT NULL
    ) ENGINE=CONNECT table_type=FIX file_name='tmp/*.txt' multiple=1;
    CREATE TABLE partemp (
    id INT NOT NULL,
    first_name VARCHAR(16) NOT NULL,
    last_name VARCHAR(30) NOT NULL,
    birth DATE NOT NULL date_format='DD/MM/YYYY',
    hired DATE NOT NULL date_format='DD/MM/YYYY',
    job CHAR(16) NOT NULL,
    salary DOUBLE(10,2) NOT NULL,
    city CHAR(12) DEFAULT 'boston' special=PARTID,
    INDEX XID(id)
    ) ENGINE=CONNECT table_type=FIX file_name='E:/Data/Test/%s.txt';
    ALTER TABLE partemp
    PARTITION BY LIST COLUMNS(city) (
    PARTITION `atlanta` VALUES IN('atlanta'),
    PARTITION `boston` VALUES IN('boston'),
    PARTITION `chicago` VALUES IN('chicago'));
    INSERT INTO partemp(id,first_name,last_name,birth,hired,job,salary) VALUES
    (1205,'Harry','Cover','1982-10-07','2010-09-21','MANAGEMENT',125000.00);
    INSERT INTO partemp VALUES
    (1524,'Jim','Beams','1985-06-18','2012-07-25','SALES',52000.00,'chicago'),
    (1431,'Johnny','Walker','1988-03-12','2012-08-09','RESEARCH',46521.87,'boston'),
    (1864,'Jack','Daniels','1991-12-01','2013-02-16','DEVELOPMENT',63540.50,'atlanta');
    SELECT city, first_name, job FROM partemp WHERE id IN (1524,1431);
    CREATE TABLE xt1 (
    id INT NOT NULL,
    msg VARCHAR(32))
    ENGINE=myisam;
    
    CREATE TABLE xt2 (
    id INT NOT NULL,
    msg VARCHAR(32)); /* engine=innoDB */
    
    CREATE TABLE xt3 (
    id INT NOT NULL,
    msg VARCHAR(32))
    ENGINE=connect table_type=CSV;
    CREATE TABLE t3 (
    id INT NOT NULL,
    msg VARCHAR(32))
    ENGINE=connect table_type=PROXY tabname='xt%s'
    PARTITION BY RANGE COLUMNS(id) (
    PARTITION `1` VALUES LESS THAN(10),
    PARTITION `2` VALUES LESS THAN(50),
    PARTITION `3` VALUES LESS THAN(MAXVALUE));
    INSERT INTO t3 VALUES
    (4, 'four'),(7,'seven'),(10,'ten'),(40,'forty'),
    (60,'sixty'),(81,'eighty one'),(72,'seventy two'),
    (11,'eleven'),(1,'one'),(35,'thirty five'),(8,'eight');
    SELECT partition_name, table_rows FROM
    information_schema.partitions WHERE table_name = 't3';
    EXPLAIN PARTITIONS SELECT * FROM t3 WHERE id = 81;
    CREATE TABLE t4 (
    id INT KEY NOT NULL,
    msg VARCHAR(32))
    ENGINE=connect table_type=MYSQL tabname='xt%s'
    PARTITION BY RANGE COLUMNS(id) (
    PARTITION `1` VALUES LESS THAN(10),
    PARTITION `2` VALUES LESS THAN(50),
    PARTITION `3` VALUES LESS THAN(MAXVALUE));
    SELECT * FROM t4 WHERE id = 7;
    SELECT `id`, `msg` FROM `xt1` WHERE `id` = 7
    CREATE TABLE rt1 (id INT KEY NOT NULL, msg VARCHAR(32))
    ENGINE=federated connection='mysql://root@host1/test/sales';
    
    CREATE TABLE rt2 (id INT KEY NOT NULL, msg VARCHAR(32))
    ENGINE=federated connection='mysql://root@host2/test/sales';
    
    CREATE TABLE rt3 (id INT KEY NOT NULL, msg VARCHAR(32))
    ENGINE=federated connection='mysql://root@host3/test/sales';
    CREATE TABLE t5 (
    id INT KEY NOT NULL,
    msg VARCHAR(32))
    ENGINE=connect table_type=MYSQL tabname='rt%s'
    PARTITION BY RANGE COLUMNS(id) (
    PARTITION `1` VALUES LESS THAN(10),
    PARTITION `2` VALUES LESS THAN(50),
    PARTITION `3` VALUES LESS THAN(MAXVALUE));
    CREATE TABLE t6 (
    id INT KEY NOT NULL,
    msg VARCHAR(32))
    ENGINE=connect table_type=MYSQL
    option_list='connect=mysql://root@host%s/test/sales'
    PARTITION BY RANGE COLUMNS(id) (
    PARTITION `1` VALUES LESS THAN(10),
    PARTITION `2` VALUES LESS THAN(50),
    PARTITION `3` VALUES LESS THAN(MAXVALUE));
    For rt1: connection='mysql://root:tinono@127.0.0.1:3307/test/xt1'
    For rt2: connection='mysql://foo:foopass@denver/dbemp/xt2'
For rt3: connection='mysql://root@huston:5505/test/tabx'
    CREATE TABLE t7 (
    id INT KEY NOT NULL,
    msg VARCHAR(32))
    ENGINE=connect table_type=MYSQL
    option_list='connect=mysql://%s'
    PARTITION BY RANGE COLUMNS(id) (
    PARTITION `root:tinono@127.0.0.1:3307/test/xt1` VALUES LESS THAN(10),
    PARTITION `foo:foopass@denver/dbemp/xt2` VALUES LESS THAN(50),
PARTITION `root@huston:5505/test/tabx` VALUES LESS THAN(MAXVALUE));
    CREATE SERVER `server_one` FOREIGN DATA WRAPPER 'mysql'
    OPTIONS
    (HOST '127.0.0.1',
    DATABASE 'test',
    USER 'root',
    PASSWORD 'tinono',
    PORT 3307);
    CREATE TABLE t8 (
    id INT KEY NOT NULL,
    msg VARCHAR(32))
    ENGINE=connect table_type=MYSQL
    option_list='connect=server_%s'
    PARTITION BY RANGE COLUMNS(id) (
    PARTITION `one/xt1` VALUES LESS THAN(10),
    PARTITION `two/xt2` VALUES LESS THAN(50),
    PARTITION `three/tabx` VALUES LESS THAN(MAXVALUE));
    CREATE TABLE t9 (
    id INT NOT NULL,
    msg VARCHAR(32),
    branch CHAR(16) DEFAULT 'main' special=PARTID,
    INDEX XID(id))
    ENGINE=connect table_type=MYSQL
    option_list='connect=server_%s/sales'
    PARTITION BY RANGE COLUMNS(id) (
    PARTITION `main` VALUES IN('main'),
    PARTITION `east` VALUES IN('east'),
    PARTITION `west` VALUES IN('west'));
    UPDATE t2 SET msg = 'quatre' WHERE id = 4;
    UPDATE t2 SET id = 41 WHERE msg = 'four';
    DELETE FROM t2 WHERE id = 4;
    INSERT INTO t2 VALUES(41, 'four');


JdbcInterface: Used to make the connection with available drivers in the standard way.

ApacheInterface: Based on the Apache commons-dbcp2 package, this interface enables making connections to DBCP data sources with any JDBC driver.

MariadbInterface: Makes the connection to a MariaDB data source.

MysqlInterface: Makes the connection to a MySQL data source. Must be used with a MySQL driver that implements data sources.

OracleInterface: Makes the connection to an Oracle data source.

PostgresqlInterface: Makes the connection to a PostgreSQL data source.

URL: The URL that you specified in the CONNECTION option.

User: As specified in the OPTION_LIST, or NULL if not specified.

Password: As specified in the OPTION_LIST, or NULL if not specified.

Scrollable: To be specified in the option list. Determines the cursor type: no = forward_only, yes = scroll_insensitive.

Block_size: Will be used to set the statement fetch size.

Table     Column  Type  Name  Size
testuuid  id      1111  uuid  2147483647
testuuid  msg     12    text  2147483647

id                                    msg
4b173ee1-1488-4355-a7ed-62ba59c2b3e7  First
6859f850-94a7-4903-8d3c-fc3c874fc274  Second

id                                    msg
4b173ee1-1488-4355-a7ed-62ba59c2b3e7  First
6859f850-94a7-4903-8d3c-fc3c874fc274  Second
2f835fb8-73b0-42f3-a1d3-8a532b38feca  inserted
null                                  null
8fc0a30e-dc66-4b95-ba57-497a161f4180  random

    CMake Error at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:148 (message): 
      Could NOT find Java (missing: Java_JAR_EXECUTABLE Java_JAVAC_EXECUTABLE 
      Java_JAVAH_EXECUTABLE Java_JAVADOC_EXECUTABLE)
    export JAVA_HOME=/usr/lib/jvm/java-8-oracle
    set global connect_jvm_path="/usr/lib/jvm/java-8-oracle/jre/lib/i386/client"
    set global connect_jvm_path="/usr/lib/jvm/java-8-oracle/jre/lib/i386/server"
    CREATE FUNCTION envar RETURNS STRING soname 'ha_connect.so';
    SELECT envar('CLASSPATH');
    olivier@olivier-Aspire-8920:~$ sudo /usr/local/mysql/bin/mysqld -u root --console --default-storage-engine=myisam --skip-innodb --connect_jvm_path="/usr/lib/jvm/java-8-oracle/jre/lib/i386/server" --connect_class_path="/home/olivier/mariadb/10.1/storage/connect:/media/olivier/SOURCE/mysql-connector-java-6.0.2/mysql-connector-java-6.0.2-bin.jar"
    CREATE TABLE boys (
    name CHAR(12),
    city CHAR(12),
    birth DATE,
    hired DATE);
    CREATE TABLE jboys ENGINE=CONNECT table_type=JDBC tabname=boys
    CONNECTION='jdbc:mysql://localhost/dbname?user=root';
    CREATE TABLE juuid ENGINE=CONNECT table_type=JDBC tabname=testuuid
    CONNECTION='jdbc:postgresql:test?user=postgres&password=pwd';
    CREATE server 'post1' FOREIGN DATA wrapper 'postgresql' OPTIONS (
    HOST 'localhost',
    DATABASE 'test',
    USER 'postgres',
    PASSWORD 'pwd',
    PORT 0,
    SOCKET '',
    OWNER 'postgres');
    CREATE TABLE juuid ENGINE=CONNECT table_type=JDBC CONNECTION='post1' tabname=testuuid;
    CREATE TABLE juuid ENGINE=CONNECT table_type=JDBC CONNECTION='post1/testuuid';
    CREATE server 'oracle' FOREIGN DATA wrapper 'oracle' OPTIONS (
    HOST 'jdbc:oracle:thin:@localhost:1521:xe',
    DATABASE 'SYSTEM',
    USER 'system',
    PASSWORD 'manager',
    PORT 0,
    SOCKET '',
    OWNER 'SYSTEM');
    CREATE TABLE empor ENGINE=CONNECT table_type=JDBC CONNECTION='oracle/HR.EMPLOYEES';
    CREATE TABLE jt1 ENGINE=CONNECT table_type=JDBC
    CONNECTION='jdbc:postgresql://localhost/mtr' dbname=PUBLIC tabname=t1
    option_list='User=mtr,Password=mtr';
    SELECT id, name, phone FROM jbig LIMIT 10;
    SELECT id, name, phone FROM big;
    CREATE TABLE jboys 
    ENGINE=CONNECT table_type=JDBC tabname='boys'
    CONNECTION='jdbc:mariadb://localhost/connect?user=root&useSSL=false'
    option_list='Wrapper=MariadbInterface,Scrollable=1';
    set global time_zone = '+2:00';
    Table « public.testuuid »
     Column | Type | Default
    --------+------+--------------------
     id     | uuid | uuid_generate_v4()
     msg    | text |
    CREATE OR REPLACE TABLE juuidcol ENGINE=CONNECT table_type=JDBC tabname=testuuid catfunc=columns
    CONNECTION='jdbc:postgresql:test?user=postgres&password=pwd';
    SELECT TABLE_NAME "Table", COLUMN_NAME "Column", data_type "Type", 
      type_name "Name", column_size "Size" 
     FROM juuidcol;
    CREATE TABLE juuid ENGINE=CONNECT TABLE_TYPE=JDBC TABNAME=testuuid
    CONNECTION='jdbc:postgresql:test?user=postgres&password=pwd';
    CREATE TABLE `juuid` (
      `id` CHAR(36) DEFAULT NULL,
      `msg` VARCHAR(8192) DEFAULT NULL
    ) ENGINE=CONNECT DEFAULT CHARSET=latin1 CONNECTION='jdbc:postgresql:test?user=postgres&password=pwd' `TABLE_TYPE`='JDBC' `TABNAME`='testuuid';
    INSERT INTO juuid(msg) VALUES('First');
    INSERT INTO juuid(msg) VALUES('Second');
    SELECT * FROM juuid;
    INSERT INTO juuid
      VALUES('2f835fb8-73b0-42f3-a1d3-8a532b38feca','inserted');
    INSERT INTO juuid VALUES(NULL,'null');
    INSERT INTO juuid VALUES('','random');
    SELECT * FROM juuid;
    SELECT * FROM juuid WHERE id = '2f835fb8-73b0-42f3-a1d3-8a532b38feca';
    DELETE FROM juuid WHERE id = '2f835fb8-73b0-42f3-a1d3-8a532b38feca';
    SELECT * FROM juuid WHERE id like '%42f3%';
    SELECT id, msg FROM testuuid WHERE id LIKE '%42f3%'
    set connect_cond_push=0;

    JSON Sample Files

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    Expense.json

    OEM example

This is an example showing how an OEM table can be implemented. It is beyond the scope of this document to explain how it works or to serve as a complete guide to writing OEM tables for CONNECT.

    tabfic.h

    The header File tabfic.h:

    tabfic.cpp

    The source File tabfic.cpp:

    tabfic.def

    The file tabfic.def: (required only on Windows)

    JSON UDFs in a separate library

    Although the JSON UDF’s can be nicely included in the CONNECT library module, there are cases when you may need to have them in a separate library.

    This is when CONNECT is compiled embedded, or if you want to test or use these UDF’s with other MariaDB versions not including them.

To build it, you need access to the latest MariaDB source code. Then, make a project containing these files:

    1. jsonudf.cpp

    2. json.cpp

    3. value.cpp

4. osutil.c

5. plugutil.c

6. maputil.cpp

7. jsonutil.cpp

jsonutil.cpp is not distributed with the source code; you will have to create it from the following:

    You can create the file by copy/paste from the above.

Set all the additional include directories to the MariaDB include directories used when compiling plugins, plus a reference to the storage/connect directory, and compile it like any other UDF, giving any name to the resulting library module (I used jsonudf.dll on Windows).

    Then you can create the functions using this name as the soname parameter.

    There are some restrictions when using the UDF’s this way:

    • The connect_json_grp_size variable cannot be accessed. The group size is set to 100.

    • In case of error, warnings are replaced by messages sent to stderr.

    • No trace.

    This page is licensed: CC BY-SA / Gnu FDL

    CONNECT CSV and FMT Table Types

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    CSV Type

Many source data files are formatted with variable length fields and records. The simplest format, known as CSV (Comma Separated Values), has column fields separated by a separator character. By default, the separator is a comma but it can be specified by the SEP_CHAR option as any character, for instance a semi-colon.

If the first record of the CSV file is the list of column names, specifying the HEADER=1 option will skip the first record on reading. On writing, if the file is empty, the column names record is automatically written.

    For instance, given the following people.csv file:

    You can create the corresponding table by:

    Alternatively the engine can attempt to automatically detect the column names, data types and widths using:

For CSV tables, the FLAG column option is the rank of the column in the file, starting from 1 for the leftmost column. This makes it possible to display columns in a different order than in the file and/or to define the table with only some columns of the CSV file. For instance:

    In this case the command:

    will display the table as:

name       children  birth
Archibald  3         2001-05-17
Nabucho    2         2003-08-12

Many applications produce CSV files having some fields quoted, in particular because the field text contains the separator character. For such files, specify the 'QUOTED=n' option to indicate the level of quoting and/or the 'QCHAR=c' option to specify the quoting character, which is " by default. Quoting with single quotes must be specified as QCHAR=''''. On writing, fields are quoted depending on the value of the quoting level, which is –1 by default, meaning no quoting:

0: The fields between quotes are read and the quotes discarded. On writing, fields are quoted only if they contain the separator character or begin with the quoting character. If they contain the quoting character, it is doubled.

1: Only text fields are written between quotes, except null fields. This also includes the column names of an eventual header.

2: All fields are written between quotes, except null fields.

3: All fields are written between quotes, including null fields.

    Files written this way are successfully read by most applications including spreadsheets.

    Note 1: If only the QCHAR option is specified, the QUOTED option will default to 1.

Note 2: For CSV tables whose separator is the tab character, specify sep_char='\t'.

Note 3: When creating a table on an existing CSV file, you can let CONNECT analyze the file and make the column description. However, this is not an elaborate analysis of the file and, for instance, DATE fields will not be recognized as such but are regarded as string fields.

Note 4: The CSV parser only reads and buffers up to 4KB per row by default; rows longer than this are truncated when read from the file. If the rows are expected to be longer than this, use lrecl to increase the limit. For example, to allow rows of up to 8KB you would use lrecl=8192.

    Restrictions on CSV Tables

• If secure_file_priv is set to the path of some directory, then CSV tables can only be created with files in that directory.

    FMT Type

FMT tables handle files of various formats that are an extension of the concept of CSV files. CONNECT supports these files provided all lines have the same format and all fields present in all records are recognizable (optional fields must have recognizable delimiters). These files are made by specific applications, and CONNECT handles them in read-only mode.

FMT tables must be created like CSV tables, specifying their type as FMT. In addition, each column description must include its format specification.

    Column Format Specification of FMT tables

    The input format for each column is specified as a FIELD_FORMAT option. A simple example is:

    In the above example, the format for this (1st) field is ' %n%s%n'. Note that the blank character at the beginning of this format is significant. No trailing blank should be specified in the column formats.

The syntax and meaning of the column input format is that of the C scanf function.

However, CONNECT uses the input format in a specific way. Instead of using it to directly store the input value in the column buffer, it uses it to delimit the substring of the input record that contains the corresponding column value. Retrieving this value is done later by the column functions, as for standard CSV files.

    This is why all column formats are made of five components:

    1. An eventual description of what is met and ignored before the column value.

    2. A marker of the beginning of the column value written as %n.

    3. The format specification of the column value itself.

4. A marker of the end of the column value written as %n (or %m for optional fields).

5. An eventual description of what is met after the column value (not valid if %m was used).

    For example, taking the file funny.txt:

    You can make a table fmtsample with 4 columns ID, NAME, DEPNO and SALARY, using the Create Table statement and column formats:

    Field 1 is an integer (%d) with eventual leading blanks.

    Field 2 is separated from field 1 by optional blanks, a comma, and other optional blanks and is between single quotes. The leading quote is included in component 1 of the column format, followed by the %n marker. The column value is specified as %[^'] meaning to keep any characters read until a quote is met. The ending marker (%n) is followed by the 5th component of the column format, the single quote that follows the column value.

    Field 3, also separated by a comma, is a number preceded by a pound sign.

    Field 4, separated by a semicolon eventually surrounded by blanks, is a number with an optional decimal point (%f).

This table is displayed as:

ID     NAME             DEPNO  SALARY
12345  BERTRAND         200    5009.13
56     POIROT-DELMOTTE  4256   18009.00
345    TRUCMUCHE        67     19000.25

    Optional Fields

To be recognized, a field normally must be at least one character long. For instance, a numeric field must have at least one digit, and a character field cannot be void. However, many existing files do not follow this rule.

    Let us suppose for instance that the preceding example file could be:

This will display an error message such as “Bad format line x field y of FMTSAMPLE”. To avoid this and accept these records, the corresponding fields must be specified as "optional". In the above example, fields 2 and 3 can have null values (in lines 3 and 2 respectively). To specify them as optional, their format must be terminated by %m (instead of the second %n). A statement such as this can do the table creation:

Note that, because the format must be terminated by %m with no additional characters, skipping the ending quote of field 2 was moved from the end of the second column format to the beginning of the third column format.

    The table result is:

ID     NAME             DEPNO  SALARY
12345  BERTRAND         200    5,009.13
56     POIROT-DELMOTTE  NULL   18,009.00
345    NULL             67     19,000.25

    Missing fields are replaced by null values if the column is nullable, blanks for character strings and 0 for numeric fields if it is not.

    Note 1: Because the formats are specified between quotes, quotes belonging to the formats must be doubled or escaped to avoid a CREATE TABLE statement syntax error.

Note 2: Characters separating columns can be included either in component 5 of the preceding column format or in component 1 of the succeeding column format, except for blanks, which should always be included in component 1 of the succeeding column format because trailing blanks on a line can sometimes be lost. This is also mandatory for optional fields.

Note 3: Because the format is mainly used to find the sub-string corresponding to a column value, the field specification does not necessarily match the column type. For instance, supposing a table contains two integer columns, NBONE and NBTWO, the two lines describing these columns could be:

The first one specifies a required integer field (%d); the second line describes a field that can be an integer, but can be replaced by a "-" (or any other) character. Specifying the format for this column as a character field (%s) enables it to be recognized without error in all cases. Later on, this field is converted to integer by the column read function, and a 0 (pseudo null) value is generated for fields whose content is not numeric.

    Bad Record Error Processing

When no match is found for a column field, the process aborts with a message such as:

This can mean either that a line of the input file is ill-formed or that the column format for this field has been wrongly specified. When you know that your file contains records that are ill-formatted and should be eliminated from normal processing, set the “maxerr” option of the CREATE TABLE statement, for instance:

This indicates that no error message should be raised for the first 100 wrong lines. You can set maxerr to a number greater than the number of wrong lines in your files to ignore them all and get no errors.

Additionally, the “accept” option permits keeping such ill-formatted lines, with the bad field and all succeeding fields of the record nullified. If “accept” is specified without “maxerr”, all ill-formatted lines are accepted.
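For instance, a hedged sketch combining both options on the same funny.txt file (the table name fmtloose is illustrative; the column formats repeat those of the optional-fields example above):

CREATE TABLE fmtloose (
  ID INTEGER(5) NOT NULL field_format=' %n%d%n',
  NAME CHAR(16) NOT NULL field_format=' , ''%n%[^'']%m',
  DEPNO INTEGER(4) field_format=''' , #%n%d%m',
  SALARY DOUBLE(12,2) field_format=' ; %n%f%n')
ENGINE=CONNECT table_type=FMT file_name='funny.txt'
option_list='maxerr=100,accept=1';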

    Note: This error processing also applies to CSV tables.

    Fields Containing a Formatted Date

A special case is that of columns containing a formatted date. In this case, two formats must be specified:

    1. The field recognition format used to delimit the date in the input record.

    2. The date format used to interpret the date.

    3. The field length option if the date representation is different than the standard type size.

    For example, let us suppose we have a web log source file containing records such as:

    The create table statement shall be like this:

Note 1: Here, field_length=20 was necessary because the default size for datetime columns is only 19. The lrecl=400 was also specified because the actual file contains more information in each record, making the record size calculated by default too small.

Note 2: The file name could have been specified as 'e:/data/token/Websamp.dat'.

    Note 3: FMT tables are currently read only.

    This page is licensed: CC BY-SA / Gnu FDL

    [
      {
        "WHO": "Joe",
        "WEEK": [
          {
            "NUMBER": 3,
            "EXPENSE": [
              {
                "WHAT": "Beer",
                "AMOUNT": 18.00
              },
              {
                "WHAT": "Food",
                "AMOUNT": 12.00
              },
              {
                "WHAT": "Food",
                "AMOUNT": 19.00
              },
              {
                "WHAT": "Car",
                "AMOUNT": 20.00
              }
            ]
          },
          {
            "NUMBER": 4,
            "EXPENSE": [
              {
                "WHAT": "Beer",
                "AMOUNT": 19.00
              },
              {
                "WHAT": "Beer",
                "AMOUNT": 16.00
              },
              {
                "WHAT": "Food",
                "AMOUNT": 17.00
              },
              {
                "WHAT": "Food",
                "AMOUNT": 17.00
              },
              {
                "WHAT": "Beer",
                "AMOUNT": 14.00
              }
            ]
          },
          {
            "NUMBER": 5,
            "EXPENSE": [
              {
                "WHAT": "Beer",
                "AMOUNT": 14.00
              },
              {
                "WHAT": "Food",
                "AMOUNT": 12.00
              }
            ]
          }
        ]
      },
      {
        "WHO": "Beth",
        "WEEK": [
          {
            "NUMBER": 3,
            "EXPENSE": [
              {
                "WHAT": "Beer",
                "AMOUNT": 16.00
              }
            ]
          },
          {
            "NUMBER": 4,
            "EXPENSE": [
              {
                "WHAT": "Food",
                "AMOUNT": 17.00
              },
              {
                "WHAT": "Beer",
                "AMOUNT": 15.00
              }
            ]
          },
          {
            "NUMBER": 5,
            "EXPENSE": [
              {
                "WHAT": "Food",
                "AMOUNT": 12.00
              },
              {
                "WHAT": "Beer",
                "AMOUNT": 20.00
              }
            ]
          }
        ]
      },
      {
        "WHO": "Janet",
        "WEEK": [
          {
            "NUMBER": 3,
            "EXPENSE": [
              {
                "WHAT": "Car",
                "AMOUNT": 19.00
              },
              {
                "WHAT": "Food",
                "AMOUNT": 18.00
              },
              {
                "WHAT": "Beer",
                "AMOUNT": 18.00
              }
            ]
          },
          {
            "NUMBER": 4,
            "EXPENSE": [
              {
                "WHAT": "Car",
                "AMOUNT": 17.00
              }
            ]
          },
          {
            "NUMBER": 5,
            "EXPENSE": [
              {
                "WHAT": "Beer",
                "AMOUNT": 14.00
              },
              {
                "WHAT": "Car",
                "AMOUNT": 12.00
              },
              {
                "WHAT": "Beer",
                "AMOUNT": 19.00
              },
              {
                "WHAT": "Food",
                "AMOUNT": 12.00
              }
            ]
          }
        ]
      }
    ]

    // TABFIC.H     Olivier Bertrand    2008-2010
    // External table type to read FIC files
    
    #define TYPE_AM_FIC  (AMT)129
    
    typedef class FICDEF *PFICDEF;
    typedef class TDBFIC *PTDBFIC;
    typedef class FICCOL *PFICCOL;
    
    /* ------------------------- FIC classes ------------------------- */
    
    /*******************************************************************/
    /*  FIC: OEM table to read FIC files.                              */
    /*******************************************************************/
    
    /*******************************************************************/
    /*  This function is exported from the Tabfic.dll 			 */
    /*******************************************************************/
    extern "C" PTABDEF __stdcall GetFIC(PGLOBAL g, void *memp);
    
    /*******************************************************************/
    /*  FIC table definition class.                                    */
    /*******************************************************************/
    class FICDEF : public DOSDEF {        /* Logical table description */
      friend class TDBFIC;
     public:
      // Constructor
      FICDEF(void) {Pseudo = 3;}
    
      // Implementation
      virtual const char *GetType(void) {return "FIC";}
    
      // Methods
      virtual BOOL DefineAM(PGLOBAL g, LPCSTR am, int poff);
      virtual PTDB GetTable(PGLOBAL g, MODE m);
    
     protected:
      // No Members
    }; // end of class FICDEF
    
    /*******************************************************************/
    /*  This is the class declaration for the FIC table.               */
    /*******************************************************************/
    class TDBFIC : public TDBFIX {
      friend class FICCOL;
     public:
      // Constructor
      TDBFIC(PFICDEF tdp);
    
      // Implementation
      virtual AMT   GetAmType(void) {return TYPE_AM_FIC;}
    
      // Methods
      virtual void  ResetDB(void);
      virtual int   RowNumber(PGLOBAL g, BOOL b = FALSE);
    
      // Database routines
      virtual PCOL  MakeCol(PGLOBAL g, PCOLDEF cdp, PCOL cprec, int n);
      virtual BOOL  OpenDB(PGLOBAL g, PSQL sqlp);
      virtual int   ReadDB(PGLOBAL g);
      virtual int   WriteDB(PGLOBAL g);
      virtual int   DeleteDB(PGLOBAL g, int irc);
    
     protected:
      // Members
      int ReadMode;				  // To read soft deleted lines
      int Rows;                           // Used for RowID
    }; // end of class TDBFIC
    
    /*******************************************************************/
    /*  Class FICCOL: for Monetary columns.                            */
    /*******************************************************************/
    class FICCOL : public DOSCOL {
     public:
      // Constructors
      FICCOL(PGLOBAL g, PCOLDEF cdp, PTDB tdbp, 
             PCOL cprec, int i, PSZ am = "FIC");
    
      // Implementation
      virtual int  GetAmType(void) {return TYPE_AM_FIC;}
    
      // Methods
      virtual void ReadColumn(PGLOBAL g);
    
     protected:
      // Members
      char Fmt;					  // The column format
    }; // end of class FICCOL
    /*******************************************************************/
    /*  FIC: OEM table to read FIC files.                              */
    /*******************************************************************/
    #if defined(WIN32)
    #define WIN32_LEAN_AND_MEAN      // Exclude rarely-used stuff
    #include <windows.h>
    #endif   // WIN32
    #include "global.h"
    #include "plgdbsem.h"
    #include "reldef.h"
    #include "filamfix.h"
    #include "tabfix.h"
    #include "tabfic.h"
    
    int TDB::Tnum;
    int DTVAL::Shift;
    
    /*******************************************************************/
    /*  Initialize the CSORT static members.                           */
    /*******************************************************************/
    int    CSORT::Limit = 0;
    double CSORT::Lg2 = log(2.0);
    size_t CSORT::Cpn[1000] = {0};      /* Precalculated cmpnum values */
    
    /* ------------- Implementation of the FIC subtype --------------- */
    
    /*******************************************************************/
    /*  This function is exported from the DLL.                        */
    /*******************************************************************/
    PTABDEF __stdcall GetFIC(PGLOBAL g, void *memp)
    {
      return new(g, memp) FICDEF;
    } // end of GetFIC
    
    /* -------------- Implementation of the FIC classes -------------- */
    
    /*******************************************************************/
    /*  DefineAM: define specific AM block values from FIC file.       */
    /*******************************************************************/
    BOOL FICDEF::DefineAM(PGLOBAL g, LPCSTR am, int poff)
    {
      ReadMode = GetIntCatInfo("Readmode", 0);
    
      // Indicate that we are a BIN format
      return DOSDEF::DefineAM(g, "BIN", poff);
    } // end of DefineAM
    
    /*******************************************************************/
    /*  GetTable: makes a new TDB of the proper type.                  */
    /*******************************************************************/
    PTDB FICDEF::GetTable(PGLOBAL g, MODE m)
    {
      return new(g) TDBFIC(this);
    } // end of GetTable
    
    /* --------------------------------------------------------------- */
    
    /*******************************************************************/
    /*  Implementation of the TDBFIC class.                            */
    /*******************************************************************/
    TDBFIC::TDBFIC(PFICDEF tdp) : TDBFIX(tdp, NULL)
    {
      ReadMode = tdp->ReadMode;
      Rows = 0;
    } // end of TDBFIC constructor
    
    /*******************************************************************/
    /*  Allocate FIC column description block.                         */
    /*******************************************************************/
    PCOL TDBFIC::MakeCol(PGLOBAL g, PCOLDEF cdp, PCOL cprec, int n)
    {
      PCOL colp;
    
      // BINCOL is alright except for the Monetary format
      if (cdp->GetFmt() && toupper(*cdp->GetFmt()) == 'M')
        colp = new(g) FICCOL(g, cdp, this, cprec, n);
      else
        colp = new(g) BINCOL(g, cdp, this, cprec, n);
    
      return colp;
    } // end of MakeCol
    
    /*******************************************************************/
    /*  RowNumber: return the ordinal number of the current row.       */
    /*******************************************************************/
    int TDBFIC::RowNumber(PGLOBAL g, BOOL b)
    {
      return (b) ? Txfp->GetRowID() : Rows;
    } // end of RowNumber
    
    /*******************************************************************/
    /*  FIC Access Method reset table for re-opening.                  */
    /*******************************************************************/
    void TDBFIC::ResetDB(void)
    {
      Rows = 0;
      TDBFIX::ResetDB();
    } // end of ResetDB
    
    /*******************************************************************/
    /*  FIC Access Method opening routine.                             */
    /*******************************************************************/
    BOOL TDBFIC::OpenDB(PGLOBAL g, PSQL sqlp)
    {
      if (Use == USE_OPEN) {
        // Table already open, just replace it at its beginning.          
        return TDBFIX::OpenDB(g);
        } // endif use
    
      if (Mode != MODE_READ) {
        // Currently FIC tables cannot be modified.
        strcpy(g->Message, "FIC tables are read only");
        return TRUE;
        } // endif Mode
    
      /*****************************************************************/
      /*  Physically open the FIC file.                                */
      /*****************************************************************/
      if (TDBFIX::OpenDB(g))
        return TRUE;
    
      Use = USE_OPEN;
      return FALSE;
    } // end of OpenDB
    
    /*******************************************************************/
    /*  ReadDB: Data Base read routine for FIC access method.          */
    /*******************************************************************/
    int TDBFIC::ReadDB(PGLOBAL g)
    {
      int rc;
    
      /*****************************************************************/
      /*  Now start the reading process.                               */
      /*****************************************************************/
      do {
        rc = TDBFIX::ReadDB(g);
        } while (rc == RC_OK && ((ReadMode == 0 && *To_Line == '*') ||
    				     (ReadMode == 2 && *To_Line != '*')));
    
      Rows++;
      return rc;
    } // end of ReadDB
    
    /*******************************************************************/
    /*  WriteDB: Data Base write routine for FIC access methods.       */
    /*******************************************************************/
    int TDBFIC::WriteDB(PGLOBAL g)
    {
      strcpy(g->Message, "FIC tables are read only");
      return RC_FX;
    } // end of WriteDB
    
    /*******************************************************************/
    /*  Data Base delete line routine for FIC access methods.          */
    /*******************************************************************/
    int TDBFIC::DeleteDB(PGLOBAL g, int irc)
    {
      strcpy(g->Message, "Delete not enabled for FIC tables");
      return RC_FX;
    } // end of DeleteDB
    
    // ---------------------- FICCOL functions --------------------------
    
    /*******************************************************************/
    /*  FICCOL public constructor.                                     */
    /*******************************************************************/
    FICCOL::FICCOL(PGLOBAL g, PCOLDEF cdp, PTDB tdbp, PCOL cprec, int i,
                   PSZ am) : DOSCOL(g, cdp, tdbp, cprec, i, am)
    {
      // Set additional FIC access method information for column.
      Fmt = toupper(*cdp->GetFmt());    // Column format
    } // end of FICCOL constructor
    
    /*******************************************************************/
    /*  Handle the monetary value of this column. It is a big integer  */
    /*  that represents the value multiplied by 1000.                  */
    /*  In this function we translate it to a double float value.      */                 
    /*******************************************************************/
    void FICCOL::ReadColumn(PGLOBAL g)
    {
      char   *p;
      int     rc;
      uint    n;
      double  fmon;
      PTDBFIC tdbp = (PTDBFIC)To_Tdb;
    
      /*****************************************************************/
      /*  If physical reading of the line was deferred, do it now.     */
      /*****************************************************************/
      if (!tdbp->IsRead())
        if ((rc = tdbp->ReadBuffer(g)) != RC_OK) {
          if (rc == RC_EF)
            sprintf(g->Message, MSG(INV_DEF_READ), rc);
    
          longjmp(g->jumper[g->jump_level], 11);
          } // endif
    
      p = tdbp->To_Line + Deplac;
    
      /*****************************************************************/
      /*  Set Value from the line field.                               */
      /*****************************************************************/
      if (*(SHORT*)(p + 8) < 0) {
        n = ~*(SHORT*)(p + 8);
        fmon = (double)n;
        fmon *= 4294967296.0;
        n = ~*(int*)(p + 4);
        fmon += (double)n;
        fmon *= 4294967296.0;
        n = ~*(int*)p;
        fmon += (double)n;
        fmon++;
        fmon /= 1000000.0;
        fmon = -fmon;
      } else {
        fmon = ((double)*(USHORT*)(p + 8));
        fmon *= 4294967296.0;
        fmon += ((double)*(ULONG*)(p + 4));
        fmon *= 4294967296.0;
        fmon += ((double)*(ULONG*)p);
        fmon /= 1000000.0;
      } // enif neg
    
      Value->SetValue(fmon);
    } // end of ReadColumn
    LIBRARY     TABFIC
    DESCRIPTION 'FIC files'
    EXPORTS
       GetFIC       @1
    #include "my_global.h"
    #include "mysqld.h"
    #include "plugin.h"
    #include <stdlib.h>
    #include <string.h>
    #include <stdio.h>
    #include <errno.h>
    
    #include "global.h"
    
    extern "C" int GetTraceValue(void) { return 0; }
    uint GetJsonGrpSize(void) { return 100; }
    
    /***********************************************************************/
    /*  These replace missing function of the (not used) DTVAL class.      */
    /***********************************************************************/
    typedef struct _datpar   *PDTP;
    PDTP MakeDateFormat(PGLOBAL, PSZ, bool, bool, int) { return NULL; }
    int ExtractDate(char*, PDTP, int, int val[6]) { return 0; }
    
    
    #ifdef __WIN__
    my_bool CloseFileHandle(HANDLE h)
    {
    	return !CloseHandle(h);
    } /* end of CloseFileHandle */
    
    #else  /* UNIX */
    my_bool CloseFileHandle(HANDLE h)
    {
    	return (close(h)) ? TRUE : FALSE;
    }  /* end of CloseFileHandle */
    
    int GetLastError()
    {
    	return errno;
    }  /* end of GetLastError */
    
    #endif  // UNIX
    
    /***********************************************************************/
    /*  Program for sub-allocating one item in a storage area.             */
    /*  Note: This function is equivalent to PlugSubAlloc except that in   */
    /*  case of insufficient memory, it returns NULL instead of doing a    */
    /*  long jump. The caller must test the return value for error.        */
    /***********************************************************************/
    void *PlgDBSubAlloc(PGLOBAL g, void *memp, size_t size)
    {
      PPOOLHEADER pph;                        // Points on area header.
    
      if (!memp)  	//  Allocation is to be done in the Sarea
        memp = g->Sarea;
    
      size = ((size + 7) / 8) * 8;  /* Round up size to multiple of 8 */
      pph = (PPOOLHEADER)memp;
    
      if ((uint)size > pph->FreeBlk) { /* Not enough memory left in pool */
        sprintf(g->Message,
         "Not enough memory in Work area for request of %d (used=%d free=%d)",
    			(int)size, pph->To_Free, pph->FreeBlk);
        return NULL;
      } // endif size
    
      // Do the suballocation the simplest way
      memp = MakePtr(memp, pph->To_Free);   // Points to sub_allocated block
      pph->To_Free += size;                 // New offset of pool free block
      pph->FreeBlk -= size;                 // New size   of pool free block
    
      return (memp);
    } // end of PlgDBSubAlloc
    Name;birth;children
    "Archibald";17/05/01;3
    "Nabucho";12/08/03;2
    CREATE TABLE people (
      name CHAR(12) NOT NULL,
      birth DATE NOT NULL date_format='DD/MM/YY',
      children SMALLINT(2) NOT NULL)
    ENGINE=CONNECT table_type=CSV file_name='people.csv'
    header=1 sep_char=';' quoted=1;
    CREATE TABLE people
    ENGINE=CONNECT table_type=CSV file_name='people.csv'
    header=1 sep_char=';' quoted=1;
    CREATE TABLE people (
      name CHAR(12) NOT NULL,
      children SMALLINT(2) NOT NULL flag=3,
      birth DATE NOT NULL flag=2 date_format='DD/MM/YY')
    ENGINE=CONNECT table_type=CSV file_name='people.csv'
    header=1 sep_char=';' quoted=1;
    SELECT * FROM people;
    IP Char(15) not null field_format=' %n%s%n',
    12345,'BERTRAND',#200;5009.13
     56, 'POIROT-DELMOTTE' ,#4256 ;18009
    345 ,'TRUCMUCHE' , #67; 19000.25
    CREATE TABLE FMTSAMPLE (
      ID INTEGER(5) NOT NULL field_format=' %n%d%n',
      NAME CHAR(16) NOT NULL field_format=' , ''%n%[^'']%n''',
      DEPNO INTEGER(4) NOT NULL field_format=' , #%n%d%n',
      SALARY DOUBLE(12,2) NOT NULL field_format=' ; %n%f%n')
    ENGINE=CONNECT table_type=FMT file_name='funny.txt';
    12345,'BERTRAND',#200;5009.13
     56, 'POIROT-DELMOTTE' ,# ;18009
    345 ,'' , #67; 19000.25
    CREATE TABLE FMTAMPLE (
      ID INTEGER(5) NOT NULL field_format=' %n%d%n',
      NAME CHAR(16) NOT NULL field_format=' , ''%n%[^'']%m',
      DEPNO INTEGER(4) field_format=''' , #%n%d%m',
      SALARY DOUBLE(12,2) field_format=' ; %n%f%n')
    ENGINE=CONNECT table_type=FMT file_name='funny.txt';
    NBONE integer(5) not null field_format=' %n%d%n',
    NBTWO integer(5) field_format=' %n%s%n',
    Bad format line 3 field 4 of funny.txt
    Option_list='maxerr=100'
    165.91.215.31 - - [17/Jul/2001:00:01:13 -0400] - "GET /usnews/home.htm HTTP/1.1" 302
    CREATE TABLE WEBSAMP (
      IP CHAR(15) NOT NULL field_format='%n%s%n',
      DATE DATETIME NOT NULL field_format=' - - [%n%s%n -0400]'
      date_format='DD/MMM/YYYY:hh:mm:ss' field_length=20,
      FILE CHAR(128) NOT NULL field_format=' - "GET %n%s%n',
      HTTP DOUBLE(4,2) NOT NULL field_format=' HTTP/%n%f%n"',
      NBONE INT(5) NOT NULL field_format=' %n%d%n')
    ENGINE=CONNECT table_type=FMT lrecl=400
    file_name='e:\\data\\token\\Websamp.dat';

    CONNECT Data Types

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

Many data types make little or no sense when applied to plain files. This is why CONNECT supports only a restricted set of data types. However, ODBC, JDBC or MYSQL source tables may contain data types not supported by CONNECT. In this case, CONNECT makes an automatic conversion to a similar supported type when possible.

    The data types currently supported by CONNECT are:


    TYPE_STRING

This type corresponds to what is generally known as CHAR or VARCHAR by database users, or as strings by programmers. Columns containing characters have a maximum length, but the character string is of fixed or variable length depending on the file format.

    The DATA_CHARSET option must be used to specify the character set used in the data source or file. Note that, unlike usually with MariaDB, when a multi-byte character set is used, the column size represents the number of bytes the column value can contain, not the number of characters.

    TYPE_INT

The INT data type contains signed 4-byte integer values (the int of the C language) ranging from –2,147,483,648 to 2,147,483,647 for the signed type and 0 to 4,294,967,295 for the unsigned type.

    TYPE_SHORT

The SHORT data type contains signed 2-byte integer values (the short integer of the C language) ranging from –32,768 to 32,767 for the signed type and 0 to 65,535 for the unsigned type.

    TYPE_TINY

The TINY data type contains 1-byte values (the char of the C language) ranging from –128 to 127 for the signed type and 0 to 255 for the unsigned type. For some table types, TYPE_TINY is used to represent Boolean values (0 is false, anything else is true).

    TYPE_BIGINT

The BIGINT data type contains signed 8-byte integer values (the long long of the C language) ranging from –9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 for the signed type and from 0 to 18,446,744,073,709,551,615 for the unsigned type.

    Inside tables, the coding of all integer values depends on the table type. In tables represented by text files, the number is written in characters, while in tables represented by binary files (BIN or VEC) the number is directly stored in the binary representation corresponding to the platform.

    The length (or precision) specification corresponds to the length of the table field in which the value is stored for text files only. It is used to set the output field length for all table types.

    TYPE_DOUBLE

The DOUBLE data type corresponds to the C language double type, a floating-point double precision value coded with 8 bytes. As for integers, the internal coding in tables depends on the table type: characters for text files, and the platform binary representation for binary files.

The length specification corresponds to the length of the table field in which the value is stored, for text files only. The scale (formerly precision) is the number of decimal digits written into text files. For binary table types (BIN and VEC) this does not apply. The length and scale specifications are used to set the output field length and number of decimals for all types of tables.

    TYPE_DECIM

The DECIMAL data type corresponds to what MariaDB or ODBC data sources call NUMBER, NUMERIC, or DECIMAL: a numeric value with a maximum number of digits (the precision), some of them possibly being decimal digits (the scale). The internal coding in CONNECT is a character representation of the number. For instance:
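The column definition from the original page is not reproduced in this extract; a minimal sketch matching the stated precision and scale (the table and file names are assumptions) would be:

CREATE TABLE decsamp (
  colname DECIMAL(14,6))
ENGINE=CONNECT table_type=CSV file_name='decsamp.csv';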

    This defines a column colname as a number having a precision of 14 and a scale of 6. Supposing it is populated by:
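The populating statement is likewise not shown here; given the internal representation quoted next, it would be along these lines:

INSERT INTO decsamp(colname) VALUES(-2658.74);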

Its internal representation is the character string -2658.740000. The way it is stored in a file table depends on the table type. The length field specification corresponds to the length of the table field in which the value is stored and is calculated by CONNECT from the precision and scale values. This length is precision plus 1 if scale is not 0 (for the decimal point) plus 1 if the column is not unsigned (for the eventual minus sign). In fixed-format tables the number is right justified in a field of width length; for variable-format tables, such as CSV, the field is the representing character string.

    Because this type is mainly used by CONNECT to handle numeric or decimal fields of ODBC, JDBC and MySQL table types, CONNECT does not provide decimal calculations or comparison by itself. This is why decimal columns of CONNECT tables cannot be indexed.

    DATE Data type

Internally, date/time values are stored by CONNECT as a signed 4-byte integer. The value 0 corresponds to 01 January 1970 12:00:00 am coordinated universal time (UTC). All other date/time values are represented by the number of seconds elapsed since or before midnight (00:00:00), 1 January 1970, to that date/time value. Date/time values before midnight 1 January 1970 are represented by a negative number of seconds.

CONNECT handles dates from 13 December 1901, 20:45:52 to 18 January 2038, 19:14:07.

    Although date and time information can be represented in both CHAR and INTEGER data types, the DATE data type has special associated properties. For each DATE value, CONNECT can store all or only some of the following information: century, year, month, day, hour, minute, and second.

    Date Format in Text Tables

    Internally, date/time values are handled as a signed 4-byte integer. But in text tables (type DOS, FIX, CSV, FMT, and DBF) dates are most of the time stored as a formatted character string (although they also can be stored as a numeric string representing their internal value). Because there are infinite ways to format a date, the format to use for decoding dates, as well as the field length in the file, must be associated to date columns (except when they are stored as the internal numeric value).

    Note that this associated format is used only to describe the way the temporal value is stored internally. This format is used both for output to decode the date in a SELECT statement as well as for input to encode the date in INSERT or UPDATE statements. However, what is kept in this value depends on the data type used in the column definition (all the MariaDB temporal values can be specified). When creating a table, the format is associated to a date column using the DATE_FORMAT option in the column definition, for instance:
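The original example is not reproduced in this extract; a hedged sketch using the column names of the result shown below (the table and file names, formats, and field length are illustrative only) could be:

CREATE TABLE datcsv (
  Name CHAR(12) NOT NULL,
  Bday DATE NOT NULL date_format='MM/DD/YYYY',
  Btime TIME NOT NULL date_format='hh:mm tt' field_length=8)
ENGINE=CONNECT table_type=CSV file_name='dates.csv';

INSERT INTO datcsv VALUES ('Foo','2000-07-13','08:30:00');
SELECT * FROM datcsv;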

    The SELECT query returns:

    Name
    Bday
    Btime

The values of the INSERT statement must be specified using the standard MariaDB syntax, and these values are displayed as MariaDB temporal values. Sure enough, the column formats apply only to the way these values are represented inside the CSV files. Here, the inserted record is:

    Note: The field_length option exists because the MariaDB syntax does not allow specifying the field length between parentheses for temporal column types. If not specified, the field length is calculated from the date format (sometimes as a max value) or made equal to the default length value if there is no date format. In the above example it could have been removed as the calculated values are the ones specified. However, if the table type would have been DOS or FIX, these values could be adjusted to fit the actual field length within the file.

    A CONNECT format string consists of a series of elements that represent a particular piece of information and define its format. The elements are recognized in the order they appear in the format string. Date and time format elements are replaced by the actual date and time as they appear in the source string. They are defined by the following groups of characters:

    Element
    Description

    Usage Notes

    • To match the source string, you can add body text to the format string, enclosing it in single quotes or double quotes if it would be ambiguous. Punctuation marks do not need to be quoted.

    • The hour information is regarded as 12-hour format if a “t” or “tt” element follows the “hh” element in the format or as 24-hour format otherwise.

    • The "MM", "DD", "hh", "mm", "ss" elements can be specified with one or two letters (e.g. "MM" or "M") making no difference on input, but placing a leading zero to one-digit values on output [] for two-letter elements.

    • If the format contains elements DDD or DDDD, the day of week name is skipped on input and ignored to calculate the internal date value. On output, the correct day of week name is generated and displayed.

    Handling dates that are out of the range of supported CONNECT dates

    If you want to make a table containing, for instance, historical dates not being convertible into CONNECT dates, make your column CHAR or VARCHAR and store the dates in the MariaDB format. All date functions applied to these strings will convert them to MariaDB dates and will work as if they were real dates. Of course they must be inserted and are displayed using the MariaDB format.
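As an illustrative sketch (not from the original page; the table, file, and column names are assumptions), a historical date can be kept as a string and still work with the MariaDB date functions:

CREATE TABLE history (
  event CHAR(32) NOT NULL,
  happened VARCHAR(10) NOT NULL)
ENGINE=CONNECT table_type=CSV file_name='history.csv';

INSERT INTO history VALUES ('Magna Carta sealed','1215-06-15');
SELECT event, YEAR(happened) FROM history;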

    NULL handling

CONNECT handles null values for data sources able to produce nulls. Currently this concerns mainly the ODBC, JDBC, MONGO, XML, JSON and INI table types. For INI, XML, JSON or MONGO types, null values are returned when the key is missing in the section (INI) or when the corresponding node does not exist in a row (XML, JSON, MONGO).

    For other file tables, the issue is to define what a null value is. In a numeric column, 0 can sometimes be a valid value but, in some other cases, it can make no sense. The same for character columns; is a blank field a valid value or not?

A special case is DATE columns with a DATE_FORMAT specified. Any value not matching the format can be regarded as NULL.

CONNECT leaves the decision to you. When declaring a column in the CREATE TABLE statement, if it is declared NOT NULL, blank or zero values are considered valid values. Otherwise they are considered NULL values. In all cases, nulls are replaced on insert or update by pseudo null values, a zero-length character string for text types or a zero value for numeric types. Once converted to pseudo null values, they are recognized as NULL only for columns declared as nullable.

    For instance:
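The example table is not included in this extract; a minimal sketch with nullable columns (the table and file names are assumptions) would be:

CREATE TABLE t1 (
  a INT,
  b CHAR(10))
ENGINE=CONNECT table_type=CSV file_name='nulltest.csv';

INSERT INTO t1 VALUES (0,'zero'),(1,'one'),(2,'two');
SELECT * FROM t1;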

    The select query replies:

    a
    b

    Sure enough, the value 0 entered on the first row is regarded as NULL for a nullable column. However, if we execute the query:
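Continuing the sketch above, the query in question would be of the form:

SELECT * FROM t1 WHERE a = 0;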

    This will return no line because a NULL is not equal to 0 in an SQL where clause.

    Now let us see what happens with not null columns:
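Again as a sketch only (names assumed), the NOT NULL variant and an insert that triggers the warning might be:

CREATE TABLE t2 (
  a INT NOT NULL,
  b CHAR(10) NOT NULL)
ENGINE=CONNECT table_type=CSV file_name='nulltest2.csv';

INSERT INTO t2 VALUES (0,'zero'),(1,'one'),(2,'two'),(NULL,'null');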

    The insert statement will produce a warning saying:

    Level
    Code
    Message

    It is replaced by a pseudo null 0 on the fourth row. Let us see the result:
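In this sketch, the two queries referred to next would be:

SELECT * FROM t2 WHERE a IS NULL;
SELECT * FROM t2 WHERE a = 0;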

The first query returns no rows: here 0 is a valid value, not NULL. The second query replies:

    a
    b

    It shows that the NULL inserted value was replaced by a valid 0 value.

    Unsigned numeric types

They are supported by CONNECT since version 1.01.0010 for fixed numeric types (TINY, SHORT, INTEGER, and BIGINT).

    Data type conversion

    CONNECT is able to convert data from one type to another in most cases. These conversions are done without warning even when this leads to truncation or loss of precision. This is true, in particular, for tables of type ODBC, JDBC, MYSQL and PROXY (via MySQL) because the source table may contain some data types not supported by CONNECT. They are converted when possible to CONNECT types.

    When converted, MariaDB types are converted as:

    MariaDB Types
    CONNECT Type
    Remark

For ENUM, the length of the column is the length of the longest value of the enumeration. For SET, the length is enough to contain all the set values concatenated with the comma separator.

In the case of TEXT columns, the handling depends on the values given to the connect_type_conv and connect_conv_size system variables.

Note: BLOB is currently not converted by default until a TYPE_BIN type is added to CONNECT. However, the FORCE option (from Connect 1.06.006) can be specified for blob columns containing text, and the SKIP option also applies to ODBC BLOB columns.

    ODBC SQL types are converted as:

    SQL Types
    Connect Type
    Remark

    JDBC SQL types are converted as:

    JDBC Types
    Connect Type
    Remark

    Note: The SKIP option also applies to ODBC and JDBC tables.

1. Here input and output are used to specify respectively decoding the date to get its numeric value from the data file and encoding a date to write it in the table file. Input is performed within SELECT queries; output is performed in INSERT or UPDATE queries.

    This page is licensed: GPLv2

    CONNECT MONGO Table Type: Accessing Collections from MongoDB

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    Classified as a NoSQL database program, MongoDB uses JSON-like documents (BSON) grouped in collections. The MONGO type is used to directly access MongoDB collections as tables.

Accessing MongoDB from CONNECT

    Accessing MongoDB from CONNECT can be done in different ways:

    1. As a MONGO table via the MongoDB C Driver.

    2. As a MONGO table via the MongoDB Java Driver.

    3. As a JDBC table using some commercially available MongoDB JDBC drivers.

    4. As a JSON table via the MongoDB C or Java Driver.

    Using the MongoDB C Driver

This is currently not available from binary distributions but only for versions compiled from source. The preferred version of the MongoDB C Driver is 1.7, because it provides package recognition. What must be done is:

    1. Install libbson and the MongoDB C Driver 1.7.

    2. Configure, compile and install MariaDB.

    With earlier versions of the Mongo C Driver, the additional include directories and libraries will have to be specified manually when compiling.

    When possible, this is the preferred means of access because it does not require all the Java path settings etc. and is faster than using the Java driver.

    Using the Mongo Java Driver

This is possible with all distributions including JDBC support, or when compiling from source. With a binary distribution that does not enable the MONGO table type, it is possible to access MongoDB using an OEM module. See the OEM example for details. The only additional things to do are:

    1. Install the MongoDB Java Driver by downloading its jar file. Several versions are available. If possible use the latest version 3 one.

    2. Add the path to it in the CLASSPATH environment variable or in the connect_class_path variable. This is like what is done to declare JDBC drivers.

    Connection is established by new Java wrappers Mongo3Interface and Mongo2Interface. They are available in a JDBC distribution in the Mongo2.jar and Mongo3.jar files (previously JavaWrappers.jar). If version 2 of the Java Driver is used, specify “Version=2” in the option list when creating tables.

    Using JDBC

    See the documentation of the existing commercial JDBC Mongo drivers.

    Using JSON

    See the specific chapter of the JSON Table Type.

    The following describes the MONGO table type.

    CONNECT MONGO Tables

    Creating and running MONGO tables requires a connection to a running local or remote MongoDB server.

    A MONGO table is defined to access a MongoDB collection. The table rows are the collection documents. For instance, to create a table based on the MongoDB sample collection restaurants, you can do something such as the following:
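The CREATE TABLE shown in the original page is not reproduced in this extract; a hedged sketch of such a table (the table name and the column list are abridged assumptions) is:

CREATE TABLE resto (
  _id VARCHAR(24) NOT NULL,
  name VARCHAR(64) NOT NULL,
  cuisine CHAR(200) NOT NULL,
  borough CHAR(16) NOT NULL,
  restaurant_id VARCHAR(12) NOT NULL)
ENGINE=CONNECT table_type=MONGO tabname='restaurants'
data_charset='utf8' connection='mongodb://localhost:27017';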

Note: The driver used is by default the C driver if only the MongoDB C Driver is installed, and the Java driver if only the MongoDB Java Driver is installed. If both are available, the driver can be specified with the DRIVER option in the option list; it defaults to C.
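For example, to force the Java driver when both are available (a sketch; the table and collection names follow the example above):

CREATE TABLE resto_java
ENGINE=CONNECT table_type=MONGO tabname='restaurants'
data_charset=utf8 option_list='Driver=Java';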

Here we did not define all the items of the collection documents, but only those that are simple values. The database is test by default. The connection value is the URI used to establish a connection to a local or remote MongoDB server. The value shown in this example corresponds to a local server started with its default port. It is the default connection value for MONGO tables, so we could have omitted it.
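For a remote server, the URI carries the host, port and any credentials. A sketch (the host name, port, user and password below are placeholders):

CREATE TABLE resto_remote (
name VARCHAR(64) NOT NULL,
borough CHAR(16) NOT NULL)
ENGINE=CONNECT table_type=MONGO tabname='restaurants'
data_charset=utf8 CONNECTION='mongodb://appuser:secret@mongo.example.com:27017';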

Discovery can also be used. This table could have been created by:

Here “depth=-1” is used to create only the columns that are simple values (no arrays or objects). Without this, with the default value “depth=0”, the table would have been created as:

    Fixing Problems With mariadb-dump

In some cases, or on some platforms, when CONNECT is set up for use with JDBC table types, this causes mariadb-dump with the --all-databases option to fail.

This was reported by Robert Dyas, who found the cause and how to fix it (see MDEV-11238).

    This occurs when the Java JRE “Usage Tracker” is enabled. In that case, Java creates a directory #mysql50#.oracle_jre_usage in the mysql data directory that shows up as a database but cannot be accessed via MySQL Workbench nor apparently backed up by mariadb-dump --all-databases.

Per the Oracle documentation, the “Usage Tracker” is disabled by default. It is enabled only when the properties file /lib/management/usagetracker.properties is created. This turns out to be wrong on some platforms, as the file does exist by default on a new installation, and the existence of this file enables the usage tracker.

The solution on CentOS 7 with the Oracle JVM is to rename or delete the usagetracker.properties file (to disable it), delete the bogus folder it created in the mysql data directory, and then restart.

    For example, the following works:

In this collection, the address column is a JSON object and the grades column is a JSON array. Unlike with JSON tables, just specifying the column name with no Jpath results in displaying their JSON representation. For instance:

    name
    address

    MongoDB Dot Notation

To address the items inside objects or arrays, specify the Jpath in MongoDB syntax (if using discovery, specify the Depth option accordingly):

    From Connect 1.7.0002

    Before Connect 1.7.0002

    If this is not done, the Oracle JVM will start the usage tracker, which will create the hidden folder .oracle_jre_usage in the mysql home directory, which will cause a mariadb-dump of the server to fail.

    name
    street
    score
    date

    MONGO Specific Options

The MongoDB syntax for Jpath does not allow the CONNECT-specific items on arrays. The same effect can still be obtained in a different way. For this, additional options are used when creating MONGO tables.

    Option
    Type
    Description
*: To be specified in the option list.

    Note: For the content of these options, refer to the MongoDB documentation.

    Colist Option

Used to pass different options when creating the MongoDB cursor used to retrieve the collection documents. One of them is the projection, which limits the items retrieved in documents. It is rarely useful, because this limitation is made automatically by CONNECT. However, it can be used with discovery to eliminate the _id (or another) column when you do not wish to keep it:

In this example, we added another cursor option, the limit option, which works like the LIMIT SQL clause.

    This additional option works only with the C driver. When using the Java driver, colist should be:

And limit would be specified in SELECT statements.
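For example, with the restest table created above, the limit is simply given in the query:

SELECT name, borough FROM restest LIMIT 5;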

Note: When used with a JSON table, specifying the projection list (or ‘all’ to get all columns) makes the JPATH values CONNECT JSON paths, not MongoDB ones, allowing JPATH options not available to MongoDB.

    Filter Option

This option is used to specify a “filter” that works like a WHERE clause on the table. Supposing we want to create a table restricted to restaurants serving English cuisine that are not located in the Manhattan borough, we can do it by:

    And if we ask:

    This query will return:

    _id
    borough
    name
    restaurant_id

    Pipeline Option

When this option is specified as true (by YES or 1), the Colist option contains a MongoDB pipeline applying to the table collection. This is a powerful means of doing things such as expanding arrays, like we do with JSON tables. For instance:

In this pipeline, “$match” is an early filter, “$unwind” means that the grades array is expanded (one document for each array value), and “$project” eliminates the _id and cuisine columns and gives the Jpath for the date, grade and score columns.

    This query replies:

    name
    grade
    score
    date

This makes it possible to get results like we do with JSON tables. For instance, the following query can be used to get the average score inside the grades array:

    name
    average

    Fullarray Option

This option, like the Depth option, is only interpreted when creating a table with discovery (meaning not specifying the columns). It tells CONNECT to generate a column for all existing values in the array. For instance, let us look at the MongoDB collection tar with:

    From Connect 1.7.0002

    Before Connect 1.7.0002

The format ‘*’ indicates that we want to see the JSON documents. This small collection is:

    Collection

    The Fullarray option can be used here to generate enough columns to see all the prices of the document prices array.

    The table has been created as:

    From Connect 1.7.0002

    Before Connect 1.7.0002

    And is displayed as:

    item
    prices_0
    prices_1
    prices_2
    prices_3
    prices_4

    Create, Read, Update and Delete Operations

    All modifying operations are supported. However, inserting into arrays must be done in a specific way. Like with the Fullarray option, we must have enough columns to specify the array values. For instance, we can create a new table by:

    From Connect 1.7.0002

    Before Connect 1.7.0002

    Now it is possible to populate it by:

The result is:

    n
    m
    surname
    name
    age
    price_1
    price_2
    price_3

    Note: If the collection does not exist yet when creating the table and inserting in it, MongoDB creates it automatically.

    It can be updated by queries such as:

To see how the array is generated, let us create another table:

    From Connect 1.7.0002

Before Connect 1.7.0002

    This table is displayed as:

    From Connect 1.7.0002

    n
    name
    prices

Before Connect 1.7.0002

    n
    name
    prices

Note: This last table can be used to make array calculations, as with JSON tables, using the JSON UDF functions. For instance:

    This query returns:

    name
    sum_prices
    avg_prices

    Note: When calculating on arrays, null values are ignored.

    Status of MONGO Table Type

This table type is still under development. It has significant advantages over the JSON type for accessing MongoDB collections. Firstly, because the access is direct, tables are always up to date even if the collection has been modified by another application. Performance-wise, it can be faster than JSON, because most processing is done by MongoDB on BSON, its internal representation of JSON data, which is designed to optimize all operations. Note that using the MongoDB C Driver can be faster than using the MongoDB Java Driver.

    Current Restrictions

    • Option “CATFUNC=tables” is not implemented yet.

    • Options SRCDEF and EXECSRC do not apply to MONGO tables.

    This page is licensed: CC BY-SA / Gnu FDL

    CONNECT Table Types - Catalog Tables

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

A catalog table is one that returns information about another table, or data source. It is similar to what MariaDB commands such as DESCRIBE or SHOW do. Applied to local tables, this just duplicates what those commands do, with the notable difference that catalog tables are tables themselves and can therefore be used inside queries, as joined tables or inside sub-selects.

Their main interest, however, is to enable querying the structure of external tables that cannot be directly queried with description commands. Let's see an example:

    Suppose we want to access the tables from a Microsoft Access database as an ODBC type table. The first information we must obtain is the list of tables existing in this data source. To get it, we will create a catalog table that will return it extracted from the result set of the SQLTables ODBC function:

    The SQLTables function returns a result set having the following columns:

    Field
    Data Type
    Null
    Info Type
    Flag Value

    Note: The Info Type and Flag Value are CONNECT interpretations of this result.

Here we could have omitted the column definitions of the catalog table or, as in the above example, chosen the columns returning the name and type of the tables. If specified, the columns must have the exact names of the corresponding SQLTables result set columns, or be given a different name with the matching flag value specification.

    (The Table_Type can be TABLE, SYSTEM TABLE, VIEW, etc.)

    For instance, to get the tables we want to use we can ask:

    This will return:

    table_name

Now we want to create the table to access the CUSTOMERS table. Because CONNECT can retrieve the column description of ODBC tables, it is not necessary to specify the columns in the create table statement:

However, if we prefer to specify them (to eventually modify them), we must know what the column definitions of that table are. We can get this information with a catalog table. This is how to do it:

    Alternatively it is possible to specify what columns of the catalog table we want:

    To get the column info:

    which results in this table:

    column_name
    type_name
    length
    prec
    nullable

    Now you can create the CUSTOMERS table as:

    Let us explain what we did here: First of all, the creation of the catalog table. This table returns the result set of an ODBC SQLColumns function sent to the ODBC data source. Columns functions always return a data set having some of the following columns, depending on the table type:

    Field
    Data Type
    Null
    Info Type
    Flag Value
    Returned by

    '*': These names have changed since earlier versions of CONNECT.

    Note: ALL includes the ODBC, JDBC, MYSQL, DBF, CSV, PROXY, TBL, XML, JSON, XCOL, and WMI table types. More could be added later.

    We chose among these columns the ones that were useful for our create statement, using the flag value when we gave them a different name (case insensitive).

The options used in this definition are the same as the ones used later for the actual CUSTOMERS data table, except that:

    1. The TABNAME option is mandatory here to specify what the queried table name is.

    2. The CATFUNC option was added both to indicate that this is a catalog table, and to specify that we want column information.

    Note: If the TABNAME option had not been specified, this table would have returned the columns of all the tables defined in the connected data source.

    Currently the available CATFUNC are:

    Function
    Specified as:
    Applies to table types:

    Note: Only the bold part of the function name specification is required.

    The DATASOURCE and DRIVERS functions respectively return the list of available data sources and ODBC drivers available on the system.

    The SQLDataSources function returns a result set having the following columns:

    Field
    Data Type
    Null
    Info Type
    Flag value

    To get the data source, you can do for instance:

    The SQLDrivers function returns a result set having the following columns:

    Field
    Type
    Null
    Info Type
    Flag value

    You can get the driver list with:

    Another example, WMI table

    To create a catalog table returning the attribute names of a WMI class, use the same table options as the ones used with the normal WMI table plus the additional option ‘catfunc=columns’. If specified, the columns of such a catalog table can be chosen among the following:

    Name
    Type
    Flag
    Description

    If you wish to use a different name for a column, set the Flag column option.

    For example, before creating the "csprod" table, you could have created the info table:

    Now the query:

    will display the result:

    Column_name
    Data_Type
    Type_name
    Length
    Prec

    This can help to define the columns of the matching normal table.

Note 1: The column length, for the info table as well as for the normal table, can be chosen arbitrarily; it just must be large enough to contain the returned information.

    Note 2: The Scale column returns 1 for text columns (meaning case insensitive); 2 for float and double columns; and 0 for other numeric columns.

    Catalog Table result size limit

Because catalog tables are processed like the information retrieved by “Discovery” when table columns are not specified in a CREATE TABLE statement, their result set is entirely retrieved and memory is allocated for it.

    By default, this allocation is done for a maximum return line number of:

    Catfunc
    Max lines

When the number of lines retrieved for a table is more than this maximum, a warning is issued by CONNECT. This is mainly prone to occur with the columns function (and also tables) on data sources having many tables, when the table name is not specified.

    If this happens, it is possible to increase the default limit using the MAXRES option, for instance:

Indeed, because the entire table result is memorized before the query is executed, the returned value would be limited even on a query such as:

    This page is licensed: GPLv2

    InnoDB Page Compression

    This feature enables transparent page-level compression for tables using algorithms like LZ4 or Zlib, reducing storage requirements.

    Overview

    InnoDB page compression provides a way to compress InnoDB tables.

    Use Cases

    Riviera Caterer

    Stillwell Avenue

    5

    10/06/2014

    Tov Kosher Kitchen

    63 Road

    20

    24/11/2014

    Driver*

    String

    C or Java.

    Version*

    Integer

    The Java Driver version (defaults to 3)

    58ada47ee5a51ddfcd5f176e

    Brooklyn

    Dear Bushwick

    41690534

    58ada47ee5a51ddfcd5f1e91

    Queens

    Snowdonia Pub

    50000290

    NULL

    NULL

    1515

    Hello

    John

    Smith

    32

    65,17

    98,12

    NULL

    2014

    Coucou

    Foo

    Bar

    20

    -1

    74

    81356

    Morris Park Bake Shop

    {"building":"1007","coord":[-73.8561,40.8484], "street":"Morris ParkAve", "zipcode":"10462"}

    Wendy'S

    {"building":"469","coord":[-73.9617,40.6629], "street":"Flatbush Avenue", "zipcode":"11225"}

    Reynolds Restaurant

    {"building":"351","coord":[-73.9851,40.7677], "street":"West 57Street", "zipcode":"10019"}

    Morris Park Bake Shop

    Morris Park Ave

    2

    03/03/2014

    Wendy'S

    Flatbush Avenue

    8

    30/12/2014

    Dj Reynolds Pub And Restaurant

    West 57 Street

    2

    Colist

    String

    Options to pass to the MongoDB cursor.

    Filter

    String

    Query used by the MongoDB cursor.

    Pipeline*

    Boolean

    If True, Colist is a pipeline.

    Fullarray*

    Boolean

    Used when creating with Discovery.

    58ada47de5a51ddfcd5ee1f3

    Brooklyn

    The Park Slope Chipshop

    40816202

    58ada47de5a51ddfcd5ee999

    Brooklyn

    Chip Shop

    41076583

    58ada47ee5a51ddfcd5f13d5

    Brooklyn

    The Monro

    Bistro Sk

    A

    10

    21/11/2014 01:00:00

    Bistro Sk

    A

    12

    19/02/2014 01:00:00

    Bistro Sk

    B

    18

    Bouley Botanical

    25,0000

    Cheri

    46,0000

    Graine De Paris

    30,0000

    Le Pescadeux

    29,7500

    {"_id":{"$oid":"58f63a5099b37d9c930f9f3b"},"item":"journal","prices":[87.0,45.0,63.0,12.0,78.0]}

    {"_id":{"$oid":"58f63a5099b37d9c930f9f3c"},"item":"notebook","prices":[123.0,456.0,789.0]}

    journal

    87.00

    45.00

    63.00

    12.00

    78.00

    notebook

    123.00

    456.00

    1789

    Welcome

    Olivier

    Bertrand

    56

    3,14

    2,36

    8,45

    1789

    Olivier

    [3.1400000000000001243,2.3599999999999998757,8.4499999999999992895]

    1515

    John

    [65.170000000000001705,98.120000000000004547,null]

    2014

    Foo

    [null,74.0,83.359999999999999432]

    1789

    Olivier

    [3.14, 2.36, 8.45]

    1515

    John

    [65.17, 98.12]

    2014

    Foo

    [, 74.0, 83.36]

    Olivier

    13.95

    4.65

    John

    163.29

    81.64

    Foo

    157,36

    78.68


    06/09/2014

    41660253

    12/06/2013 02:00:00

    789.00

    char(128)

    NO

    FLD_NAME

    1

    Table_Type

    char(16)

    NO

    FLD_TYPE

    2

    Remark

    char(128)

    NO

    FLD_REM

    5

    VARCHAR

    30

    0

    1

    ContactTitle

    VARCHAR

    30

    0

    1

    Address

    VARCHAR

    60

    0

    1

    City

    VARCHAR

    15

    0

    1

    Region

    VARCHAR

    15

    0

    1

    PostalCode

    VARCHAR

    10

    0

    1

    Country

    VARCHAR

    15

    0

    1

    Phone

    VARCHAR

    24

    0

    1

    Fax

    VARCHAR

    24

    0

    1

    18

    ODBC, JDBC

    Table_Name

    char(128)

    NO

    FLD_TABNAME

    19

    ODBC, JDBC

    Column_Name

    char(128)

    NO

    FLD_NAME

    1

    ALL

    Data_Type

    smallint(6)

    NO

    FLD_TYPE

    2

    ALL

    Type_Name

    char(30)

    NO

    FLD_TYPENAME

    3

    ALL

    Column_Size*

    int(10)

    NO

    FLD_PREC

    4

    ALL

    Buffer_Length*

    int(10)

    NO

    FLD_LENGTH

    5

    ALL

    Decimal_Digits*

    smallint(6)

    NO

    FLD_SCALE

    6

    ALL

    Radix

    smallint(6)

    NO

    FLD_RADIX

    7

    ODBC, JDBC, MYSQL

    Nullable

    smallint(6)

    NO

    FLD_NULL

    8

    ODBC, JDBC, MYSQL

    Remarks

    char(255)

    NO

    FLD_REM

    9

    ODBC, JDBC, MYSQL

    Collation

    char(32)

    NO

    FLD_CHARSET

    10

    MYSQL

    Key

    char(4)

    NO

    FLD_KEY

    11

    MYSQL

    Default_value

    N.A.

    FLD_DEFAULT

    12

    Privilege

    N.A.

    FLD_PRIV

    13

    Date_fmt

    char(32)

    NO

    FLD_DATEFMT

    15

    MYSQL

    Xpath/Jpath

    Varchar(256)

    NO

    FLD_FORMAT

    16

    XML/JSON

    Column_Size

    INT

    4

    The field length in characters

    Buffer_Length

    INT

    5

    Depends on the coding

    Scale

    INT

    6

    Depends on the type

    1

    CHAR

    255

    1

    Name

    1

    CHAR

    255

    1

    SKUNumber

    1

    CHAR

    255

    1

    UUID

    1

    CHAR

    255

    1

    Vendor

    1

    CHAR

    255

    1

    Version

    1

    CHAR

    255

    1

    Table_Cat

    char(128)

    NO

    FLD_CAT

    17

    Table_Name

    char(128)

    NO

    FLD_SCHEM

    18

    Categories

    Customers

    Employees

    Products

    Shippers

    Suppliers

    CustomerID

    VARCHAR

    5

    0

    1

    CompanyName

    VARCHAR

    40

    0

    1

    Table_Cat*

    char(128)

    NO

    FLD_CAT

    17

    ODBC, JDBC

    Table_Schema*

    char(128)

    NO

    FNC_TAB

    tables

    ODBC, JDBC, MYSQL

    FNC_COL

    columns

    ODBC, JDBC, MYSQL, DBF, CSV, PROXY, XCOL, TBL, WMI

    FNC_DSN

    datasourcesdsnsqldatasources

    ODBC

    FNC_DRIVER

    driverssqldrivers

    Name

    varchar(256)

    NO

    FLD_NAME

    1

    Description

    varchar(256)

    NO

    FLD_REM

    9

    Description

    varchar(128)

    YES

    FLD_NAME

    1

    Attributes

    varchar(256)

    YES

    FLD_REM

    9

    Column_Name

    CHAR

    1

    The name of the property

    Data_Type

    INT

    2

    The SQL data type

    Type_Name

    CHAR

    3

    Caption

    1

    CHAR

    255

    1

    Description

    1

    CHAR

    255

    1

    Drivers

    256

    Data Sources

    512

    Columns

    20,000

    Tables

    10,000

    Table_Name

    ContactName

    FLD_SCEM

    ODBC, JDBC

    The SQL type name

    IdentifyingNumber

    CREATE TABLE resto (
    _id VARCHAR(24) NOT NULL,
    name VARCHAR(64) NOT NULL,
    cuisine CHAR(200) NOT NULL,
    borough CHAR(16) NOT NULL,
    restaurant_id VARCHAR(12) NOT NULL)
    ENGINE=CONNECT table_type=MONGO tabname='restaurants'
    data_charset=utf8 CONNECTION='mongodb://localhost:27017';
    CREATE TABLE resto
    ENGINE=CONNECT table_type=MONGO tabname='restaurants'
    data_charset=utf8 option_list='level=-1';
    CREATE TABLE `resto` (
      `_id` CHAR(24) NOT NULL,
      `address` VARCHAR(136) NOT NULL,
      `borough` CHAR(13) NOT NULL,
      `cuisine` CHAR(64) NOT NULL,
      `grades` VARCHAR(638) NOT NULL,
      `name` CHAR(98) NOT NULL,
      `restaurant_id` CHAR(8) NOT NULL
    ) ENGINE=CONNECT DEFAULT CHARSET=latin1 `TABLE_TYPE`='MONGO' `TABNAME`='restaurants' `DATA_CHARSET`='utf8';
    sudo mv /usr/java/default/jre/lib/management/management.properties /usr/java/default/jre/lib/management/management.properties.TRACKER-OFF
    sudo reboot
    sudo rm -rf  /var/lib/mysql/.oracle_jre_usage
    sudo reboot
    SELECT name, address FROM resto LIMIT 3;
    CREATE TABLE newresto (
    _id VARCHAR(24) NOT NULL,
    name VARCHAR(64) NOT NULL,
    cuisine CHAR(200) NOT NULL,
    borough CHAR(16) NOT NULL,
    street VARCHAR(65) jpath='address.street',
    building CHAR(16) jpath='address.building',
    zipcode CHAR(5) jpath='address.zipcode',
    grade CHAR(1) jpath='grades.0.grade',
    score INT(4) NOT NULL jpath='grades.0.score', 
    `date` DATE jpath='grades.0.date',
    restaurant_id VARCHAR(255) NOT NULL)
    ENGINE=CONNECT table_type=MONGO tabname='restaurants'
    data_charset=utf8 CONNECTION='mongodb://localhost:27017';
    CREATE TABLE newresto (
    _id VARCHAR(24) NOT NULL,
    name VARCHAR(64) NOT NULL,
    cuisine CHAR(200) NOT NULL,
    borough CHAR(16) NOT NULL,
    street VARCHAR(65) field_format='address.street',
    building CHAR(16) field_format='address.building',
    zipcode CHAR(5) field_format='address.zipcode',
    grade CHAR(1) field_format='grades.0.grade',
    score INT(4) NOT NULL field_format='grades.0.score', 
    `date` DATE field_format='grades.0.date',
    restaurant_id VARCHAR(255) NOT NULL)
    ENGINE=CONNECT table_type=MONGO tabname='restaurants'
    data_charset=utf8 CONNECTION='mongodb://localhost:27017';
    SELECT name, street, score, DATE FROM newresto LIMIT 5;
    CREATE TABLE restest
    ENGINE=CONNECT table_type=MONGO tabname='restaurants'
    data_charset=utf8 option_list='depth=-1'
    colist='{"projection":{"_id":0},"limit":5}';
    colist='{"_id":0}';
    CREATE TABLE english
    ENGINE=CONNECT table_type=MONGO tabname='restaurants'
    data_charset=utf8
    colist='{"projection":{"cuisine":0}}'
    filter='{"cuisine":"English","borough":{"$ne":"Manhattan"}}'
    option_list='Depth=-1';
    SELECT * FROM english;
    CREATE TABLE resto2 (
    name VARCHAR(64) NOT NULL,
    borough CHAR(16) NOT NULL,
    DATE DATETIME NOT NULL,
    grade CHAR(1) NOT NULL,
    score INT(4) NOT NULL)
    ENGINE=CONNECT table_type=MONGO tabname='restaurants' data_charset=utf8
    colist='{"pipeline":[{"$match":{"cuisine":"French"}},{"$unwind":"$grades"},{"$project":{"_id":0,"name":1,"borough":1,"date":"$grades.date","grade":"$grades.grade","score":"$grades.score"}}]}'
    option_list='Pipeline=1';
    SELECT name, grade, score, DATE FROM resto2
    WHERE borough = 'Bronx';
    SELECT name, AVG(score) average FROM resto2
    GROUP BY name HAVING average >= 25;
    CREATE TABLE seetar (
    Collection VARCHAR(300) NOT NULL jpath='*')
    ENGINE=CONNECT table_type=MONGO tabname=tar;
    CREATE TABLE seetar (
    Collection VARCHAR(300) NOT NULL field_format='*')
    ENGINE=CONNECT table_type=MONGO tabname=tar;
    CREATE TABLE tar
    ENGINE=CONNECT table_type=MONGO
    colist='{"projection":{"_id":0}}'
    option_list='depth=1,Fullarray=YES';
    CREATE TABLE `tar` (
      `item` CHAR(8) NOT NULL,
      `prices_0` DOUBLE(12,6) NOT NULL `JPATH`='prices.0',
      `prices_1` DOUBLE(12,6) NOT NULL `JPATH`='prices.1',
      `prices_2` DOUBLE(12,6) NOT NULL `JPATH`='prices.2',
      `prices_3` DOUBLE(12,6) DEFAULT NULL `JPATH`='prices.3',
      `prices_4` DOUBLE(12,6) DEFAULT NULL `JPATH`='prices.4'
    ) ENGINE=CONNECT DEFAULT CHARSET=latin1 `TABLE_TYPE`='MONGO' `COLIST`='{"projection":{"_id":0}}' `OPTION_LIST`='depth=1,Fullarray=YES';
    CREATE TABLE `tar` (
      `item` CHAR(8) NOT NULL,
      `prices_0` DOUBLE(12,6) NOT NULL `FIELD_FORMAT`='prices.0',
      `prices_1` DOUBLE(12,6) NOT NULL `FIELD_FORMAT`='prices.1',
      `prices_2` DOUBLE(12,6) NOT NULL `FIELD_FORMAT`='prices.2',
      `prices_3` DOUBLE(12,6) DEFAULT NULL `FIELD_FORMAT`='prices.3',
      `prices_4` DOUBLE(12,6) DEFAULT NULL `FIELD_FORMAT`='prices.4'
    ) ENGINE=CONNECT DEFAULT CHARSET=latin1 `TABLE_TYPE`='MONGO' `COLIST`='{"projection":{"_id":0}}' `OPTION_LIST`='level=1,Fullarray=YES';
    CREATE TABLE testin (
    n INT NOT NULL,
    m CHAR(12) NOT NULL,
    surname CHAR(16) NOT NULL jpath='person.name.first',
    name CHAR(16) NOT NULL jpath='person.name.last',
    age INT(3) NOT NULL jpath='person.age',
    price_1 DOUBLE(8,2) jpath='d.0',
    price_2 DOUBLE(8,2) jpath='d.1',
    price_3 DOUBLE(8,2) jpath='d.2')
    ENGINE=CONNECT table_type=MONGO tabname='tin'
    CONNECTION='mongodb://localhost:27017';
    CREATE TABLE testin (
    n INT NOT NULL,
    m CHAR(12) NOT NULL,
    surname CHAR(16) NOT NULL field_format='person.name.first',
    name CHAR(16) NOT NULL field_format='person.name.last',
    age INT(3) NOT NULL field_format='person.age',
    price_1 DOUBLE(8,2) field_format='d.0',
    price_2 DOUBLE(8,2) field_format='d.1',
    price_3 DOUBLE(8,2) field_format='d.2')
    ENGINE=CONNECT table_type=MONGO tabname='tin'
    CONNECTION='mongodb://localhost:27017';
    INSERT INTO testin VALUES
    (1789, 'Welcome', 'Olivier','Bertrand',56, 3.14, 2.36, 8.45),
    (1515, 'Hello', 'John','Smith',32, 65.17, 98.12, NULL),
    (2014, 'Coucou', 'Foo','Bar',20, -1.0, 74, 81356);
UPDATE testin SET price_3 = 83.36 WHERE n = 2014;
    CREATE TABLE tintin (
    n INT NOT NULL,
    name CHAR(16) NOT NULL jpath='person.name.first',
    prices VARCHAR(255) jpath='d')
    ENGINE=CONNECT table_type=MONGO tabname='tin';
    CREATE TABLE tintin (
    n INT NOT NULL,
    name CHAR(16) NOT NULL field_format='person.name.first',
    prices VARCHAR(255) field_format='d')
ENGINE=CONNECT table_type=MONGO tabname='tin';
    SELECT name, jsonget_real(prices,'[+]') sum_prices, jsonget_real(prices,'[!]') avg_prices FROM tintin;
    CREATE TABLE tabinfo (
      table_name VARCHAR(128) NOT NULL,
      table_type VARCHAR(16) NOT NULL)
    ENGINE=CONNECT table_type=ODBC catfunc=TABLES
CONNECTION='DSN=MS Access Database;DBQ=C:/Program Files/Microsoft Office/Office/1033/FPNWIND.MDB;';
    SELECT TABLE_NAME FROM tabinfo WHERE table_type = 'TABLE';
    CREATE TABLE Customers ENGINE=CONNECT table_type=ODBC
CONNECTION='DSN=MS Access Database;DBQ=C:/Program Files/Microsoft Office/Office/1033/FPNWIND.MDB;';
    CREATE TABLE custinfo ENGINE=CONNECT table_type=ODBC
    tabname=customers catfunc=columns
CONNECTION='DSN=MS Access Database;DBQ=C:/Program Files/Microsoft Office/Office/1033/FPNWIND.MDB;';
    CREATE TABLE custinfo (
      column_name CHAR(128) NOT NULL,
      type_name CHAR(20) NOT NULL,
      LENGTH INT(10) NOT NULL flag=7,
  prec SMALLINT(6) NOT NULL flag=9,
  nullable SMALLINT(6) NOT NULL)
    ENGINE=CONNECT table_type=ODBC tabname=customers
    catfunc=columns
CONNECTION='DSN=MS Access Database;DBQ=C:/Program Files/Microsoft Office/Office/1033/FPNWIND.MDB;';
    SELECT * FROM custinfo;
    CREATE TABLE Customers (
      CustomerID VARCHAR(5),
      CompanyName VARCHAR(40),
      ContactName VARCHAR(30),
      ContactTitle VARCHAR(30),
      Address VARCHAR(60),
      City VARCHAR(15),
      Region VARCHAR(15),
      PostalCode VARCHAR(10),
      Country VARCHAR(15),
      Phone VARCHAR(24),
      Fax VARCHAR(24))
    ENGINE=CONNECT table_type=ODBC block_size=10
CONNECTION='DSN=MS Access Database;DBQ=C:/Program Files/Microsoft Office/Office/1033/FPNWIND.MDB;';
CREATE TABLE datasources
ENGINE=CONNECT table_type=ODBC catfunc=DSN;
    CREATE TABLE drivers
    ENGINE=CONNECT table_type=ODBC catfunc=drivers;
    CREATE TABLE CSPRODCOL (
      Column_name CHAR(64) NOT NULL,
      Data_Type INT(3) NOT NULL,
      Type_name CHAR(16) NOT NULL,
      LENGTH INT(6) NOT NULL,
      Prec INT(2) NOT NULL flag=6)
    ENGINE=CONNECT table_type='WMI' catfunc=col;
    SELECT * FROM csprodcol;
    CREATE TABLE allcols ENGINE=CONNECT table_type=odbc
    CONNECTION='DSN=ORACLE_TEST;UID=system;PWD=manager'
    option_list='Maxres=110000' catfunc=columns;
    SELECT COUNT(*) FROM allcols;

    , , real

    TYPE_DECIM

    Numeric value

    , numeric, number

    TYPE_DATE

    4 bytes integer

    , , , ,

    DDD

    The three-character weekday abbreviation.

    DDDD

    The full weekday name.

    hh

    The one or two-digit hour in 12-hour or 24-hour format.

    mm

    The one or two-digit minute.

    ss

    The one or two-digit second.

    t

    The one-letter AM/PM abbreviation (that is, AM is entered as "A").

    tt

    The two-letter AM/PM abbreviation (that is, AM is entered as "AM").

  • Temporal values are always stored as numeric in BIN and VEC tables.

  • , , real

    TYPE_DOUBLE

    8 byte floating point

    , numeric

    TYPE_DECIM

    Length depends on precision and scale

    all related types

    TYPE_DATE

    Date format can be set accordingly

    , longlong

    TYPE_BIGINT

    8 byte integer

    ,

    TYPE_STRING

    Numeric value not accessible

    All text types

    TYPE_STRING TYPE_ERROR

Depending on the value of the connect_type_conv system variable.

    Other types

    TYPE_ERROR

    Not supported, no conversion provided.

    SQL_SMALLINT

    TYPE_SHORT

    SQL_TINYINT, SQL_BIT

    TYPE_TINY

    SQL_FLOAT, SQL_REAL, SQL_DOUBLE

    TYPE_DOUBLE

    SQL_DATETIME

    TYPE_DATE

    len = 10

    SQL_INTERVAL

    TYPE_STRING

    len = 8 + ((scale) ? (scale+1) : 0)

    SQL_TIMESTAMP

    TYPE_DATE

    len = 19 + ((scale) ? (scale +1) : 0)

    SQL_BIGINT

    TYPE_BIGINT

    SQL_GUID

    TYPE_STRING

    llen=36

    SQL_BINARY, SQL_VARBINARY, SQL_LONG-VARBINARY

    TYPE_STRING

len = min(abs(len), connect_conv_size). Only if the value of connect_type_conv is force. The column should use the binary charset.

    Other types

    TYPE_ERROR

    Not supported.

    SMALLINT

    TYPE_SHORT

    TINYINT, BIT

    TYPE_TINY

    FLOAT, REAL, DOUBLE

    TYPE_DOUBLE

    DATE

    TYPE_DATE

    len = 10

    TIME

    TYPE_DATE

    len = 8 + ((scale) ? (scale+1) : 0)

    TIMESTAMP

    TYPE_DATE

    len = 19 + ((scale) ? (scale +1) : 0)

    BIGINT

    TYPE_BIGINT

    UUID (specific to PostgreSQL)

TYPE_STRING or TYPE_ERROR

len=36, or TYPE_ERROR if connect_type_conv is NO

    Other types

    TYPE_ERROR

    Not supported.

    TYPE_STRING

    Zero ended string

    char, varchar, text

    TYPE_INT

    4 bytes integer

    int, mediumint, integer

    TYPE_SHORT

    2 bytes integer

    smallint

    TYPE_TINY

    1 byte integer

    tinyint

    TYPE_BIGINT

    8 bytes integer

    bigint, longlong

    TYPE_DOUBLE

    Charlie

    2012-11-12

    15:30:00

    YY

    The last two digits of the year (that is, 1996 would be coded as "96").

    YYYY

    The full year (that is, 1996 could be entered as "96" but displayed as “1996”).

    MM

    The one or two-digit month number.

    MMM

    The three-character month abbreviation.

    MMMM

    The full month name.

    DD

    The one or two-digit month day.

    NULL

    zero

    NULL

    ???

    Warning

    1048

    Column 'a' cannot be null

    0

    zero

    0

    ???

    integer, medium integer

    TYPE_INT

    4 byte integer

    small integer

    TYPE_SHORT

    2 byte integer

    tiny integer

    TYPE_TINY

    1 byte integer

    char, varchar

    TYPE_STRING

    SQL_CHAR, SQL_VARCHAR

    TYPE_STRING

    SQL_LONGVARCHAR

    TYPE_STRING

    len = min(abs(len), connect_conv_size) If the column is generated by discovery (columns not specified) its length is connect_conv_size.

    SQL_NUMERIC, SQL_DECIMAL

    TYPE_DECIM

    SQL_INTEGER

    TYPE_INT

    (N)CHAR, (N)VARCHAR

    TYPE_STRING

    LONG(N)VARCHAR

    TYPE_STRING

    len = min(abs(len), connect_conv_size) If the column is generated by discovery (columns not specified), its length is connect_conv_size

    NUMERIC, DECIMAL, VARBINARY

    TYPE_DECIM

    INTEGER

    TYPE_INT

    CHAR
    VARCHAR
    INTEGER
    integer numeric 2-byte
    integer numeric 1-byte
    BIGINT
    double
    DECIMAL
    UTC
    1
    null values
    ODBC
    JDBC
    MYSQL
    XML
    JSON
    INI
    JSON
    CREATE TABLE
    ENUM
    SET
    TEXT
    connect_type_conv
    connect_conv_size
    BLOB
    connect_type_conv

    8 bytes floating point

    Same length

  • InnoDB page compression can be used on any storage device and any file system.

  • InnoDB page compression is most efficient on file systems that support sparse files. See Saving Storage Space with Sparse Files for more information.

  • InnoDB page compression is most beneficial on solid state drives (SSDs) and other flash storage. See Optimized for Flash Storage for more information.

  • InnoDB page compression performs best when your storage device and file system support atomic writes, since that allows the InnoDB doublewrite buffer to be disabled. See Atomic Write Support for more information.

Comparison with the COMPRESSED Row Format

    InnoDB page compression is a modern way to compress your InnoDB tables. It is similar to InnoDB's COMPRESSED row format, but it has many advantages. Some of the differences are:

    • With InnoDB page compression, compressed pages are immediately decompressed after being read from the tablespace file, and only uncompressed pages are stored in the buffer pool. In contrast, with InnoDB's COMPRESSED row format, compressed pages are decompressed immediately after they are read from the tablespace file, and both the uncompressed and compressed pages are stored in the buffer pool. This means that the COMPRESSED row format uses more space in the buffer pool than InnoDB page compression does.

    • With InnoDB page compression, pages are compressed just before being written to the tablespace file. In contrast, with InnoDB's COMPRESSED row format, pages are re-compressed immediately after any changes, and the compressed pages are stored in the buffer pool alongside the uncompressed pages. These changes are then occasionally flushed to disk. This means that the COMPRESSED row format re-compresses data more frequently than InnoDB page compression does.

• With InnoDB page compression, multiple compression algorithms are supported. In contrast, with InnoDB's COMPRESSED row format, zlib is the only supported compression algorithm. This means that the COMPRESSED row format has fewer compression options than InnoDB page compression does.

    In general, InnoDB page compression is superior to the COMPRESSED row format.

    Comparison with Storage Engine-Independent Column Compression

    • See Storage Engine-Independent Column Compression - Comparison with InnoDB Page Compression.

    Configuring the InnoDB Page Compression Algorithm

    There is not currently a table option to set different InnoDB page compression algorithms for individual tables.

    However, the server-wide InnoDB page compression algorithm can be configured by setting the innodb_compression_algorithm system variable.

    When this system variable is changed, the InnoDB page compression algorithm does not change for existing pages that were already compressed with a different InnoDB page compression algorithm. InnoDB is able to handle this situation without issues, because every page in an InnoDB tablespace contains metadata about the InnoDB page compression algorithm in the page header. This means that InnoDB supports having uncompressed pages and pages compressed with different InnoDB page compression algorithms in the same InnoDB tablespace at the same time.
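For example, a sketch of switching algorithms (the users table here is the example table defined further down this page; the rebuild is optional and simply recompresses existing pages with the newly configured algorithm, assuming the build supports lz4):

SET GLOBAL innodb_compression_algorithm='lz4';

-- existing pages compressed with the previous algorithm stay readable;
-- rebuilding the table rewrites its pages, which are then compressed with lz4
OPTIMIZE TABLE users;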

    This system variable can be set to one of the following values:

    System Variable Value
    Description

none

Pages are not compressed. This was the default value in older MariaDB versions.

zlib

Pages are compressed using the bundled zlib compression algorithm. This is the default value in current MariaDB versions.

lz4

Pages are compressed using the lz4 compression algorithm.

lzo

Pages are compressed using the lzo compression algorithm.

lzma

Pages are compressed using the lzma compression algorithm.

bzip2

Pages are compressed using the bzip2 compression algorithm.

snappy

Pages are compressed using the snappy compression algorithm.

However, on many distributions, the standard MariaDB builds do not support all InnoDB page compression algorithms by default. From MariaDB 10.7, algorithms can be installed as plugins.

    This system variable can be changed dynamically with SET GLOBAL:

    This system variable can also be set in a server option group in an option file prior to starting up the server:

    Checking Supported InnoDB Page Compression Algorithms

    On many distributions, the standard MariaDB builds do not support all InnoDB page compression algorithms by default. Therefore, if you want to use a specific InnoDB page compression algorithm, then you should check whether your MariaDB build supports it.

The zlib compression algorithm is always supported. From MariaDB 10.7, the other algorithms can be installed as plugins.

    A MariaDB build's support for other InnoDB page compression algorithms can be checked by querying the following status variables with SHOW GLOBAL STATUS:

    Status Variable
    Description

Innodb_have_lz4

Whether InnoDB supports the lz4 compression algorithm.

Innodb_have_lzo

Whether InnoDB supports the lzo compression algorithm.

Innodb_have_lzma

Whether InnoDB supports the lzma compression algorithm.

Innodb_have_bzip2

Whether InnoDB supports the bzip2 compression algorithm.

Innodb_have_snappy

Whether InnoDB supports the snappy compression algorithm.

    For example:

    Adding Support for an InnoDB Page Compression Algorithm

On many distributions, the standard MariaDB builds do not support all InnoDB page compression algorithms by default. From MariaDB 10.7, algorithms can be installed as plugins, but in earlier versions, if you want to use certain InnoDB page compression algorithms, then you may need to do the following:

• Download the package for the desired compression library (zlib, lz4, lzo, lzma, bzip2, or snappy).

    • Install the package for the desired compression library.

    • Compile MariaDB from the source distribution.

    The general steps for compiling MariaDB are:

    • Download and unpack the source code distribution:

    • Configure the build using cmake:

    • Check CMakeCache.txt to confirm that it has found the desired compression library on your system.

    • Compile the build:

    • Either install the build:

    Or make a package to install:

    See Compiling MariaDB From Source for more information.

    Enabling InnoDB Page Compression

    InnoDB page compression is not enabled by default. However, InnoDB page compression can be enabled for just individual InnoDB tables or it can be enabled for all new InnoDB tables by default.

    InnoDB page compression is also only supported if the InnoDB table is in a file per-table tablespace. Therefore, the innodb_file_per_table system variable must be set to ON to use InnoDB page compression.

InnoDB page compression is only supported if the InnoDB table uses the Barracuda file format. Therefore, in MariaDB versions that still have the innodb_file_format system variable, it must be set to Barracuda to use InnoDB page compression.

    InnoDB page compression is also only supported if the InnoDB table's row format is COMPACT or DYNAMIC.

    Enabling InnoDB Page Compression by Default

InnoDB page compression can be enabled for all new InnoDB tables by default by setting the innodb_compression_default system variable to ON.

    This system variable can be set to one of the following values:

    System Variable Value
    Description

    OFF

    New InnoDB tables do not use InnoDB page compression. This is the default value.

    ON

    New InnoDB tables use InnoDB page compression.

    This system variable can be changed dynamically with SET GLOBAL:

    This system variable's session value can be changed dynamically with SET SESSION:

    This system variable can also be set in a server option group in an option file prior to starting up the server:

    Enabling InnoDB Page Compression for Individual Tables

    InnoDB page compression can be enabled for individual tables by setting the PAGE_COMPRESSED table option to 1:

    Configuring the Compression Level

    Some InnoDB page compression algorithms support a compression level option, which configures how the InnoDB page compression algorithm will balance speed and compression.

    The compression level's supported values range from 1 to 9. The range goes from the fastest to the most compact, which means that 1 is the fastest and 9 is the most compact.

    Only the following InnoDB page compression algorithms currently support compression levels:

    • zlib

    • lzma

    If an InnoDB page compression algorithm does not support compression levels, then it ignores any provided compression level value.

    Configuring the Default Compression Level

    The default compression level can be configured by setting the innodb_compression_level system variable.

    This system variable's default value is 6.

    This system variable can be changed dynamically with SET GLOBAL:

    This system variable can also be set in a server option group in an option file prior to starting up the server:

    Configuring the Compression Level for Individual Tables

    The compression level for individual tables can also be configured by setting the PAGE_COMPRESSION_LEVEL table option for the table:

    Configuring the Failure Threshold and Maximum Padding

    InnoDB page compression can encounter compression failures.

    InnoDB page compression's failure threshold can be configured. If InnoDB encounters more compression failures than the failure threshold, then it pads pages with zeroed out bytes before attempting to compress them as a way to reduce failures. If the failure rate stays above the failure threshold, then InnoDB pads pages with more zeroed out bytes in 128 byte increments.

    InnoDB page compression's maximum padding can also be configured.

    Configuring the Failure Threshold

    The failure threshold can be configured by setting the innodb_compression_failure_threshold_pct system variable.

    This system variable's supported values range from 0 to 100.

    This system variable's default value is 5.

    This system variable can be changed dynamically with SET GLOBAL:

    This system variable can also be set in a server option group in an option file prior to starting up the server:

    Configuring the Maximum Padding

    The maximum padding can be configured by setting the innodb_compression_pad_pct_max system variable.

    This system variable's supported values range from 0 to 75.

    This system variable's default value is 50.

    This system variable can be changed dynamically with SET GLOBAL:

    This system variable can also be set in a server option group in an option file prior to starting up the server:

    Saving Storage Space with Sparse Files

    When InnoDB page compression is used, InnoDB may still write the compressed page to the tablespace file with the original size of the uncompressed page, which would be equivalent to the value of the innodb_page_size system variable. This is done by design, because when InnoDB's I/O code needs to read the page from disk, it can only read the full page size. However, this is obviously not optimal.

    On file systems that support sparse files, this problem is solved by writing the tablespace file as a sparse file using the punch hole technique. With the punch hole technique, InnoDB will only write the actual compressed page size to the tablespace file, aligned to sector size. The rest of the page is trimmed.

    This punch hole technique allows InnoDB to read the compressed page from disk as the full page size, even though the compressed page really takes up less space on the file system.

    There are some potential disadvantages to using sparse files:

    • Some utilities may require special options in order to handle sparse files in an efficient manner.

    • Most existing file systems are slow to unlink() sparse files. As a consequence, if a tablespace file is a sparse file, then dropping the table can be very slow.

    Sparse File Support on Linux

    On Linux, the following file systems support sparse files:

    • ext3

    • ext4

    • xfs

    • btrfs

    • nvmfs

    On Linux, file systems need to support the fallocate() system call with the FALLOC_FL_PUNCH_HOLE and FALLOC_FL_KEEP_SIZE flags:

    Some Linux utilities may require special options in order to work with sparse files efficiently:

    • The ls utility will report the non-sparse size of the tablespace file when executed with default behavior, but ls -s will report the actual amount of storage allocated for the tablespace file.

    • The cp utility is pretty good at auto-detecting sparse files, but it also provides the cp --sparse=always and cp --sparse=never options, if the auto-detection is not desired.

    • The tar utility will archive sparse files with their non-sparse size when executed with default behavior, but tar --sparse will auto-detect sparse files, and archive them with their sparse size.

    Sparse File Support on Windows

    On Windows, the following file systems support sparse files:

    • NTFS

    On Windows, file systems need to support the DeviceIoControl() function with the FSCTL_SET_SPARSE and FSCTL_SET_ZERO_DATA control codes:

    Configuring InnoDB to use Sparse Files

In recent MariaDB versions, InnoDB automatically uses the punch hole technique to create sparse files when the underlying file system supports them.

In earlier versions, InnoDB can be configured to use the punch hole technique to create sparse files by configuring the innodb_use_trim and innodb_use_fallocate system variables. These system variables can be set in a server option group in an option file prior to starting up the server:

    Optimized for Flash Storage

    InnoDB page compression was designed to be optimized on solid state drives (SSDs) and other flash storage.

    InnoDB page compression was originally developed by collaborating with Fusion-io. As a consequence, it was originally designed to work best on FusionIO devices using NVMFS. Fusion-io has since been acquired by Western Digital, and they have decided not to continue supporting NVMFS.

    However, InnoDB page compression is still likely to be most optimized on solid state drives (SSDs) and other flash storage.

    InnoDB page compression works without any issues on hard disk drives (HDDs). However, since its compression relies on the use of sparse files, the data may be somewhat fragmented on disk. This fragmentation may hurt performance on HDDs, since they handle random reads and writes much more slowly than flash storage.

    Configuring InnoDB Page Flushing

    With InnoDB page compression, pages are compressed when they are flushed to disk. Therefore, it can be helpful to optimize the configuration of InnoDB's page flushing. See InnoDB Page Flushing for more information.

    Monitoring InnoDB Page Compression

    InnoDB page compression can be monitored by querying the following status variables with SHOW GLOBAL STATUS:

    Status Variable
    Description

Innodb_page_compression_saved

Bytes saved by compression

Innodb_page_compression_trim_sect512

Number of 512 sectors trimmed

Innodb_page_compression_trim_sect1024

Number of 1024 sectors trimmed

Innodb_page_compression_trim_sect2048

Number of 2048 sectors trimmed

Innodb_page_compression_trim_sect4096

Number of 4096 sectors trimmed

Innodb_page_compression_trim_sect8192

Number of 8192 sectors trimmed

Innodb_page_compression_trim_sect16384

Number of 16384 sectors trimmed

Innodb_page_compression_trim_sect32768

Number of 32768 sectors trimmed

Innodb_num_pages_page_compressed

Number of pages compressed

Innodb_num_page_compressed_trim_op

Number of trim operations

Innodb_num_page_compressed_trim_op_saved

Number of trim operations saved

Innodb_num_pages_page_decompressed

Number of pages decompressed

Innodb_num_pages_page_compression_error

Number of compression errors

    With InnoDB page compression, a page is only compressed when it is flushed to disk. This means that if you are monitoring InnoDB page compression via these status variables, then the status variables values will only get incremented when the dirty pages are flushed to disk, which does not necessarily happen immediately:

    Compatibility with Backup Tools

    mariadb-backup supports InnoDB page compression.

    Percona XtraBackup does not support InnoDB page compression.

    Acknowledgements

    • InnoDB page compression was developed by collaborating with Fusion-io. Special thanks especially to Dhananjoy Das and Torben Mathiasen.

    See Also

    • Storage-Engine Independent Column Compression

    • Atomic Write Support

    • MariaDB Introduces Atomic Writes

• Small Datum: Third day with InnoDB transparent page compression

• InnoDB holepunch compression vs the filesystem in MariaDB 10.1

• Significant performance boost with new MariaDB page compression on FusionIO

• INFLOW '14: NVM Compression—Hybrid Flash-Aware Application Level Compression

    This page is licensed: CC BY-SA / Gnu FDL

    colname DECIMAL(14,6)
    INSERT INTO xxx VALUES (-2658.74);
    CREATE TABLE birthday (
      Name VARCHAR(17),
      Bday DATE field_length=10 date_format='MM/DD/YYYY',
      Btime TIME field_length=8 date_format='hh:mm tt')
    engine=CONNECT table_type=CSV;
    
    INSERT INTO birthday VALUES ('Charlie','2012-11-12','15:30:00');
    
    SELECT * FROM birthday;
    Charlie,11/12/2012,03:30 PM
    CREATE TABLE t1 (a INT, b CHAR(10)) ENGINE=CONNECT;
    INSERT INTO t1 VALUES (0,'zero'),(1,'one'),(2,'two'),(NULL,'???');
    SELECT * FROM t1 WHERE a IS NULL;
    SELECT * FROM t1 WHERE a = 0;
    CREATE TABLE t1 (a INT NOT NULL, b CHAR(10) NOT NULL) ENGINE=CONNECT;
    INSERT INTO t1 VALUES (0,'zero'),(1,'one'),(2,'two'),(NULL,'???');
    SELECT * FROM t1 WHERE a IS NULL;
    SELECT * FROM t1 WHERE a = 0;
    SET GLOBAL innodb_compression_algorithm='lzma';
    [mariadb]
    ...
    innodb_compression_algorithm=lzma
    SHOW GLOBAL STATUS WHERE Variable_name IN (
       'Innodb_have_lz4', 
       'Innodb_have_lzo', 
       'Innodb_have_lzma', 
       'Innodb_have_bzip2', 
       'Innodb_have_snappy'
    );
    +--------------------+-------+
    | Variable_name      | Value |
    +--------------------+-------+
    | Innodb_have_lz4    | OFF   |
    | Innodb_have_lzo    | OFF   |
    | Innodb_have_lzma   | ON    |
    | Innodb_have_bzip2  | OFF   |
    | Innodb_have_snappy | OFF   |
    +--------------------+-------+
    wget https://downloads.mariadb.com/MariaDB/mariadb-10.4.8/source/mariadb-10.4.8.tar.gz
    tar -xvzf mariadb-10.4.8.tar.gz
    cd mariadb-10.4.8/
    cmake .
    make
    make install
    make package
    SET GLOBAL innodb_compression_default=ON;
    SET GLOBAL innodb_file_per_table=ON;
    
    SET GLOBAL innodb_file_format='Barracuda';
    
    SET GLOBAL innodb_default_row_format='dynamic';
    
    SET GLOBAL innodb_compression_algorithm='lzma';
    
    SET SESSION  innodb_compression_default=ON;
    
    CREATE TABLE users (
       user_id INT NOT NULL, 
       b VARCHAR(200), 
       PRIMARY KEY(user_id)
    ) 
       ENGINE=InnoDB;
    [mariadb]
    ...
    innodb_compression_default=ON
    SET GLOBAL innodb_file_per_table=ON;
    
    SET GLOBAL innodb_file_format='Barracuda';
    
    SET GLOBAL innodb_default_row_format='dynamic';
    
    SET GLOBAL innodb_compression_algorithm='lzma';
    
    CREATE TABLE users (
       user_id INT NOT NULL, 
       b VARCHAR(200), 
       PRIMARY KEY(user_id)
    ) 
       ENGINE=InnoDB
       PAGE_COMPRESSED=1;
    SET GLOBAL innodb_compression_level=9;
    [mariadb]
    ...
    innodb_compression_level=9
    SET GLOBAL innodb_file_per_table=ON;
    
    SET GLOBAL innodb_file_format='Barracuda';
    
    SET GLOBAL innodb_default_row_format='dynamic';
    
    SET GLOBAL innodb_compression_algorithm='lzma';
    
    CREATE TABLE users (
       user_id INT NOT NULL, 
       b VARCHAR(200), 
       PRIMARY KEY(user_id)
    ) 
       ENGINE=InnoDB
       PAGE_COMPRESSED=1
       PAGE_COMPRESSION_LEVEL=9;
    SET GLOBAL innodb_compression_failure_threshold_pct=10;
    [mariadb]
    ...
    innodb_compression_failure_threshold_pct=10
    SET GLOBAL innodb_compression_pad_pct_max=75;
    [mariadb]
    ...
    innodb_compression_pad_pct_max=75
    fallocate(file_handle, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, file_offset, remainder_len);
    DeviceIoControl(file_handle, FSCTL_SET_SPARSE, inbuf, inbuf_size, 
       outbuf, outbuf_size,  NULL, &overlapped)
    ...
    DeviceIoControl(file_handle, FSCTL_SET_ZERO_DATA, inbuf, inbuf_size, 
       outbuf, outbuf_size,  NULL, &overlapped)
    [mariadb]
    ...
    innodb_use_trim=ON
    innodb_use_fallocate=ON
    CREATE TABLE `tab` (
         `id` INT(11) NOT NULL,
         `str` VARCHAR(50) DEFAULT NULL,
         PRIMARY KEY (`id`)
       ) ENGINE=InnoDB;
     
    INSERT INTO tab VALUES (1, 'str1');
    
    SHOW GLOBAL STATUS LIKE 'Innodb_num_pages_page_compressed';
    +----------------------------------+-------+
    | Variable_name                    | Value |
    +----------------------------------+-------+
    | Innodb_num_pages_page_compressed | 0     |
    +----------------------------------+-------+
     
    SET GLOBAL innodb_file_per_table=ON;
    
    SET GLOBAL innodb_file_format='Barracuda';
    
    SET GLOBAL innodb_default_row_format='dynamic';
    
    SET GLOBAL innodb_compression_algorithm='lzma';
     
    ALTER TABLE tab PAGE_COMPRESSED=1;
    
    SHOW GLOBAL STATUS LIKE 'Innodb_num_pages_page_compressed';
    +----------------------------------+-------+
    | Variable_name                    | Value |
    +----------------------------------+-------+
    | Innodb_num_pages_page_compressed | 0     |
    +----------------------------------+-------+
     
    SELECT SLEEP(10);
    +-----------+
    | SLEEP(10) |
    +-----------+
    |         0 |
    +-----------+
     
    SHOW GLOBAL STATUS LIKE 'Innodb_num_pages_page_compressed';
    +----------------------------------+-------+
    | Variable_name                    | Value |
    +----------------------------------+-------+
    | Innodb_num_pages_page_compressed | 3     |
    +----------------------------------+-------+
    double
    float
    decimal
    date
    datetime
    time
    timestamp
    year
    double
    float
    decimal
    date
    bigint
    enum
    set
    connect_type_conv
    connect_type_conv
    connect_type_conv=NO


    CONNECT ODBC Table Type: Accessing Tables From Another DBMS

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

ODBC (Open Database Connectivity) is a standard API for accessing database management systems (DBMS). CONNECT uses this API to access data contained in other DBMS without having to implement a specific application for each of them. An exception is access to MySQL, which should be done using the MYSQL table type.

    Note: On Linux, unixODBC must be installed.

    These tables are given the type ODBC. For example, if a "Customers" table is contained in an Access™ database you can define it with a command such as:

The TABNAME option defaults to the name of the CONNECT table. It is required only if the source table name is different from the name of the CONNECT table. Note also that for some data sources this name is case-sensitive.

Often, because CONNECT can retrieve the table description using ODBC catalog functions, the column definitions can be left unspecified. For instance, this table can be created simply as:

The BLOCK_SIZE specification is used later to set the RowsetSize when retrieving rows from the ODBC table. A reasonably large RowsetSize can greatly accelerate the fetching process.

    If you specify the column description, the column names of your table must exist in the data source table. However, you are not obliged to define all the data source columns and you can change the order of the columns. Some type conversion can also be done if appropriate. For instance, to access the FireBird sample table EMPLOYEE, you could define your table as:

This definition ignores the FIRST_NAME, LAST_NAME, JOB_CODE, and JOB_GRADE columns. It places FULL_NAME, the last column of the original table, in second position. The type of the HIRE_DATE column was changed from TIMESTAMP to DATE, and the type of the DEPT_NO column was changed from CHAR to INTEGER.

    Currently, some restrictions apply to ODBC tables:

    1. Cursor type is forward only (sequential reading).

2. No indexing of ODBC tables (do not specify any columns as key). However, because CONNECT can often add a WHERE clause to the query sent to the data source, indexing is used by the data source if it supports it. (Remote indexing is available from CONNECT version 1.04.)

3. CONNECT ODBC supports SELECT and INSERT. UPDATE and DELETE are also supported in a somewhat restricted way (see below). For other operations, use an ODBC table with the EXECSRC option (see below) to send proper commands directly to the data source.

    Random Access of ODBC Tables

In CONNECT version 1.03, ODBC tables are not indexable. Version 1.04 adds a remote indexing facility to the ODBC table type.

However, some queries require random access to an ODBC table; for instance, when it is joined to another table or used in an ORDER BY query applied to a long column or a large table.

There are several ways to enable random (position) access to a CONNECT ODBC table. They depend on the following table options:

Option        Type      Used For
Block_Size    Integer   Specifying the rowset size.
Memory*       Integer   Storing the result set in memory.
Scrollable*   Boolean   Using a scrollable cursor.

    * - To be specified in the option_list.

When dealing with small tables, the simplest way to enable random access is to specify a rowset size equal to or larger than the table size (or the result set size if a pushed-down WHERE clause is used). This means that the whole result is in memory after the first fetch and CONNECT will use it for further positional accesses.

    Another way to have the result set in memory is to use the memory option. This option can be set to the following values:

0. No memory used (the default). This is best when the table is read sequentially, as in SELECT statements with at most a WHERE clause.

1. The memory size required is calculated during a first sequential table read. The allocated memory is filled during a second sequential read. The table rows are then retrieved from memory. This should be used when the table is accessed several times randomly, such as in sub-selects or as the target table of a join.

2. A first query is executed to get the result set size and the needed memory is allocated. It is filled on the first sequential read. Random access of the table is then possible. This can be used in the case of ORDER BY clauses, when MariaDB uses position reading.

Note that the best way to handle ORDER BY is to set the max_length_for_sort_data variable to a larger value (its default value of 1024 is quite small). This requires less memory, particularly when a WHERE clause limits the retrieved data set. The reason is that, for an ORDER BY query, MariaDB first retrieves the result set sequentially along with the position of each record. Often the sort can be done from the result set itself if it is not too big. But if it is too big, or if it involves some "long" columns, only the positions are sorted and MariaDB retrieves the final result from the table, read in random order. If setting the max_length_for_sort_data variable is not feasible or does not work, then to be able to retrieve table data from memory after the first sequential read, the memory option must be set to 2.
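As a purely illustrative sketch (the table, DSN, and column names below are hypothetical, not taken from this page), the memory option is given in the option list, and the sort buffer limit is raised for ORDER BY handling:

-- Hypothetical ODBC table whose result set is kept in memory (Memory=2)
CREATE TABLE big_remote (
  id INT,
  label VARCHAR(200))
ENGINE=CONNECT TABLE_TYPE=ODBC TABNAME='BIG_TABLE'
CONNECTION='DSN=SomeDSN'
OPTION_LIST='Memory=2';

-- A larger limit often lets MariaDB sort directly from the result set
SET SESSION max_length_for_sort_data=8192;

SELECT id, label FROM big_remote ORDER BY label;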

For tables too large to be stored in memory, another possibility is to make your table use a scrollable cursor. In this case each randomly accessed row can be retrieved from the data source by specifying its cursor position, which is reasonably fast. However, scrollable cursors are not supported by all data sources.

With CONNECT version 1.04, another way to provide random access is to specify some columns to be indexed. This should be done only when the corresponding column of the source table is also indexed. This should be used for tables too large to be stored in memory and is similar to the remote indexing used by the MYSQL table type and by the FEDERATED engine.
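As an illustration only (the table, column, and DSN names are hypothetical), a remote index can be declared on a column that is also indexed in the source table:

-- The KEY declaration enables remote indexing (CONNECT 1.04 and later);
-- the corresponding source column should itself be indexed
CREATE TABLE remcust (
  custid INT NOT NULL,
  name VARCHAR(40),
  KEY (custid))
ENGINE=CONNECT TABLE_TYPE=ODBC TABNAME='CUSTOMERS'
CONNECTION='DSN=SomeDSN';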

There remains the possibility of extracting data from the external table and constructing another table, in any file format, from the data source. For instance, to construct a fixed-format DOS table containing the CUSTOMER table data, create the table as:

Now you can use custfix for fast database operations on the copied customer table data.

    Retrieving data from a spreadsheet

    ODBC can also be used to create tables based on tabular data belonging to an Excel spreadsheet:

    This supposes that a tabular zone of the sheet including column headers is defined as a table named CONTACT or using a “named reference”. Refer to the Excel documentation for how to specify tables inside sheets. Once done, you can ask:

    This will extract the data from Excel and display:

Nom                  Fonction                              Societe
Boisseau Frederic                                          9 Telecom
Martelliere Nicolas                                        Vidal SA (Groupe UBM)
Remy Agathe                                                Price Minister
Du Halgouet Tanguy                                         Danone
Vandamme Anna                                              GDF
Thomas Willy                                               Europ Assistance France
Thomas Dominique                                           Acoss (DG des URSSAF)
Thomas Berengere     Responsable SI Decisionnel            DEXIA Credit Local
Husy Frederic        Responsable Decisionnel               Neuf Cegetel
Lemonnier Nathalie   Directeur Marketing Client            Louis Vuitton
Louis Loic           Reporting International Decisionnel   Accor
Menseau Eric                                               Orange France

Here again, the column descriptions were left to CONNECT when creating the table.

    Multiple ODBC tables

    The concept of multiple tables can be extended to ODBC tables when they are physically represented by files, for instance to Excel or Access tables. The condition is that the connect string for the table must contain a field DBQ=filename, in which wildcard characters can be included as for multiple=1 tables in their filename. For instance, a table contained in several Excel files CA200401.xls, CA200402.xls, ...CA200412.xls can be created by a command such as:

This works provided that in each file the relevant data is internally defined in Excel as a table named "bank account". This extension to ODBC does not support multiple=2. The qchar option was specified to quote the identifiers in the SELECT statement sent to ODBC, in particular when the table or column names contain blanks, to avoid SQL syntax errors.

    Caution: Avoid accessing tables belonging to the currently running MariaDB server via the MySQL ODBC connector. This may not work and may cause the server to be restarted.

    Performance consideration

To avoid extracting entire tables from an ODBC source, which can be a lengthy process, CONNECT extracts the "compatible" part of query WHERE clauses and adds it to the ODBC query. Compatible means that it must be understood by the data source. In particular, clauses involving scalar functions are not kept because the data source may have different functions than MariaDB or use a different syntax. Of course, clauses involving a sub-select are also skipped. This also transfers any indexing benefit to the data source.
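As a small illustration using the Customer table defined at the top of this page, the first condition below can be added to the query sent to the data source, while the second, which uses a scalar function, is evaluated locally by MariaDB:

-- Simple comparison: added to the query sent through ODBC
SELECT * FROM Customer WHERE Country = 'France';

-- Scalar function: not forwarded; the filtering is done locally by MariaDB
SELECT * FROM Customer WHERE LENGTH(Country) > 6;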

    Take care with clauses involving string items because you may not know whether they are treated by the data source as case sensitive or case insensitive. If in doubt, make your queries as if the data source was processing strings as case sensitive to avoid incomplete results.

    Using ODBC Tables inside correlated sub-queries

Unlike non-correlated subqueries, which are executed only once, correlated subqueries are executed many times. This is what ODBC calls a "requery". Several methods can be used by CONNECT to deal with this, depending on the setting of the MEMORY or SCROLLABLE Boolean options:

Option           Description
Default          Implementing "requery" by discarding the current result set and resubmitting the query (as MFC does).
Memory=1 or 2    Storing the result set in memory as MYSQL tables do.
Scrollable=Yes   Using a scrollable cursor.

Note: the MEMORY and SCROLLABLE options must be specified in the OPTION_LIST.

Because the table is accessed several times, this can make queries take a very long time, except for small tables, and is almost unacceptable for big tables. However, if it cannot be avoided, using the memory method is the best choice and can be more than four times faster than the default method. If it is supported by the driver, using a scrollable cursor is slightly slower than using memory but can be an alternative to avoid memory problems when the sub-query returns a huge result set.

If the result set is of reasonable size, it is also possible to specify a block_size option equal to or slightly larger than the result set. The whole result set, being read on the first fetch, can then be accessed many times without having to do anything else.

Another good workaround is to replace the ODBC table within the correlated sub-query with a local copy of it, because MariaDB is often able to optimize the query and provide a very fast execution.

    Accessing specified views

    Instead of specifying a source table name via the TABNAME option, it is possible to retrieve data from a “view” whose definition is given in a new option SRCDEF. For instance:

    Or simply, because CONNECT can retrieve the returned column definition:

    Then, when executing for instance:

The processing of the GROUP BY is done by the data source, which returns only the generated result set; only the WHERE clause is applied locally. The result:

country      customers
Brazil       9
France       11
Germany      11
Mexico       5
Spain        5
UK           7
USA          13
Venezuela    4

This makes it possible to let the data source do complicated operations, such as joining several tables or executing procedures that return a result set. This minimizes the data transferred through ODBC.

    Data Modifying Operations

The only data modifying operations are the INSERT, UPDATE and DELETE commands. They can be executed successfully only if the data source database or tables are not read-only.

    INSERT Command

When inserting values into an ODBC table, local values are used and sent to the ODBC table. This does not make any difference when the values are constant, but it does in a query such as:

Here t1 is an ODBC table and t2 is a locally defined table that must exist on the local server. Besides, this is a good way to fill a distant ODBC table with local data.

    CONNECT does not directly support INSERT commands such as:

Indeed, the "on duplicate key update" part of it is ignored, and the command will result in an error if the key value is duplicated.

    UPDATE and DELETE Commands

Unlike the INSERT command, UPDATE and DELETE are supported in a simplified way. Only simple table commands are supported; CONNECT does not support multi-table commands, commands sent from a procedure, or commands issued via a trigger. These commands are simply rephrased to correspond to the data source syntax and sent to the data source for execution. Let us suppose we created the table:

    We can populate it by:

The function NOW() is executed by MariaDB and its returned value is sent to the ODBC table.

    Let us see what happens when updating the table. If we use the query:

    CONNECT will rephrase the command as:

What it does is just replace the local table name with the remote table name and change all the backticks to blanks, or to the data source identifier quoting characters if QUOTED is specified. This command is then sent to the data source to be executed by it.

This is simpler and can be faster than doing a positional update using a cursor and commands such as "select ... for update of ...", which are not supported by all data sources. However, there are some restrictions that must be understood, due to the way this is handled by MariaDB.

1. MariaDB does not know about all of the above. The command is parsed as if it were to be executed locally. Therefore, it must respect the MariaDB syntax.

    2. Being executed by the data source, the (rephrased) command must also respect the data source syntax.

    3. All data referenced in the SET and WHERE clause belongs to the data source.

This is possible because both MariaDB and the data source use the SQL language. But you must use only the basic features that are part of the core SQL language. For instance, keywords like IGNORE or LOW_PRIORITY will cause a syntax error with many data sources.

Scalar function names can also be different, which severely restricts their use. For instance:

    This will not work with SQLite3, the data source returning an “unknown scalar function” error message. Note that in this particular case, you can rephrase it to:

This is understood by both parsers, and even though this function would return NULL when executed by MariaDB, it does return the current date when executed by SQLite3. But this begins to get too tricky, so to overcome all these restrictions and permit all types of commands to be executed by the data source, CONNECT provides a specific ODBC table subtype, described next.

    Sending commands to a Data Source

    This can be done using a special subtype of ODBC table. Let us see this in an example:

    The key points in this create statement are the EXECSRC option and the column definition.

The EXECSRC option tells CONNECT that this table is used to send a command to the data source. Most of the sent commands do not return a result set. Therefore, the table columns are used to specify the command to be executed and to get the results of the execution. The names of these columns can be chosen arbitrarily; their function comes from the FLAG value:

Flag=0: The command to execute.
Flag=1: The affected rows, or -1 in case of error, or the number of result columns if the command returns a result set.
Flag=2: The returned (possibly error) message.

    How to use this table and specify the command to send? By executing a command such as:

    This will send the command specified in the WHERE clause to the data source and return the result of its execution. The syntax of the WHERE clause must be exactly as shown above. For instance:

    This command returns:

command                                                            number  message
CREATE TABLE lite (ID integer primary key autoincrement, name...       0   Affected rows

    Now we can create a standard ODBC table on the newly created table:

We can populate it directly using the supported INSERT statement:

    And see the result:

ID   name   birth        rem
1    Toto   2005-06-12   NULL
2    Foo    NULL         No ID
3    Truc   1998-10-27   NULL
4    John   1968-05-30   Last

Any command, for instance UPDATE, can be executed from the crlite table:

    This command returns:

command                                              number  message
update lite set birth = '2012-07-15' where ID = 2        1   Affected rows

    Let us verify it:

ID   name   birth        rem
2    Foo    2012-07-15   No ID

    The syntax to send a command is rather strange and may seem unnatural. It is possible to use an easier syntax by defining a stored procedure such as:

    Now you can send commands like this:

    This is possible only when sending one single command.

    Sending several commands together

Grouping commands uses an easier syntax and is faster because only one connection is made for all of them. To send several commands in one call, use the following syntax:

    When several commands are sent, the execution stops at the end of them or after a command that is in error. To continue after n errors, set the option maxerr=n (0 by default) in the option list.
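For instance, a minimal sketch (modelled on the crlite table above; the option values are illustrative) of an EXECSRC table that keeps executing after up to two failing commands:

-- Hypothetical EXECSRC table that continues after up to 2 errors
CREATE TABLE crlite2 (
  command VARCHAR(128) NOT NULL,
  number INT(5) NOT NULL flag=1,
  message VARCHAR(255) flag=2)
ENGINE=CONNECT TABLE_TYPE=ODBC
CONNECTION='Driver=SQLite3 ODBC Driver;Database=test.sqlite3;NoWCHAR=yes'
OPTION_LIST='Execsrc=1,Maxerr=2';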

Note 1: It is possible to specify the SRCDEF option when creating an EXECSRC table. It is the command sent by default when a WHERE clause is not specified.

    Note 2: Most data sources do not allow sending several commands separated by semi-colons.

Note 3: Quotes inside commands must be escaped. This can be avoided by using a different quoting character than the one used in the command.

    Note 4: The sent command must obey the data source syntax.

    Note 5: Sent commands apply in the specified database. However, they can address any table within this database, or belonging to another database using the name syntax schema.tabname.

    Connecting to a Data Source

    There are two ways to establish a connection to a data source:

    1. Using SQLDriverConnect and a Connection String

    2. Using SQLConnect and a Data Source Name (DSN)

    The first way uses a Connection String whose components describe what is needed to establish the connection. It is the most complete way to do it and by default CONNECT uses it.

    The second way is a simplified way in which ODBC is just given the name of a DSN that must have been defined to ODBC or UnixOdbc and that contains the necessary information to establish the connection. Only the user name and password can be specified out of the DSN specification.

    Defining the Connection String

    Using the first way, the connection string must be specified. This is sometimes the most difficult task when creating ODBC tables because, depending on the operating system and the data source, this string can widely differ.

    The format of the ODBC Connection String is:

Where character-string has zero or more characters; identifier has one or more characters; attribute-keyword is not case-sensitive; attribute-value may be case-sensitive; and the value of the DSN keyword does not consist solely of blanks. Due to the connection string grammar, keywords and attribute values that contain the characters []{}(),;?*=!@ should be avoided. The value of the DSN keyword cannot consist only of blanks, and should not contain leading blanks. Because of the grammar of the system information, keywords and data source names cannot contain the backslash (\) character. Applications do not have to add braces around the attribute value after the DRIVER keyword unless the attribute contains a semicolon (;), in which case the braces are required. If the attribute value that the driver receives includes the braces, the driver should not remove them, but they should be part of the returned connection string.

    ODBC Defined Connection Attributes

    The ODBC defined attributes are:

    • DSN - the name of the data source to connect to. You must create this before attempting to refer to it. You create new DSNs through the ODBC Administrator (Windows), ODBCAdmin (unixODBC's GUI manager) or in the odbc.ini file.

    • DRIVER - the name of the driver to connect to. You can use this in DSN-less connections.

• FILEDSN - the name of a file containing the connection attributes.

• SAVEFILE - request that the DSN attributes be saved in this file.

    • UID/PWD - any username and password the database requires for authentication.

Other attributes are DSN-dependent. The connection string can give the name of the driver in the DRIVER field or the data source in the DSN field (pay attention to the exact spelling and case) and has other fields that depend on the data source. When specifying a file, the DBQ field must give the full path and name of the file containing the table. Refer to the specific ODBC connector documentation for the exact syntax of the connection string.
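For illustration only (the driver name, server address, database, and credentials are placeholders, not taken from this page), a DSN-less connection string for a SQL Server source accessed through FreeTDS might look like this:

-- Hypothetical DSN-less connection using the DRIVER, UID and PWD attributes
CREATE TABLE remtab
ENGINE=CONNECT TABLE_TYPE=ODBC TABNAME='dbo.Customers'
CONNECTION='DRIVER=FreeTDS;Server=192.168.1.10;Port=1433;Database=sales;UID=myuser;PWD=mypass;';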

    Using a Predefined DSN

    This is done by specifying in the option list the Boolean option “UseDSN” as yes or 1. In addition, string options “user” and “password” can be optionally specified in the option list.

    When doing so, the connection string just contains the name of the predefined Data Source. For instance:

    Note: the connection data source name (limited to 32 characters) should not be preceded by “DSN=”.

    ODBC Tables on Linux/Unix

    In order to use ODBC tables, you will need to have unixODBC installed. Additionally, you will need the ODBC driver for your foreign server's protocol. For example, for MS SQL Server or Sybase, you will need to have FreeTDS installed.

Make sure the user running mysqld (usually the mysql user) has permission to access the ODBC data source configuration and the ODBC drivers. If you get an error on Linux/Unix when using TABLE_TYPE=ODBC:

    You must make sure that the user running mysqld (usually "mysql") has enough permission to load the ODBC driver library. It can happen that the driver file does not have enough read privileges (use chmod to fix this), or loading is prevented by SELinux configuration (see below).

    Try this command in a shell to check if the driver had enough permission:

    SELinux

    SELinux can cause various problems. If you think SELinux is causing problems, check the system log (e.g. /var/log/messages) or the audit log (e.g. /var/log/audit/audit.log).

    mysqld can't load some executable code, so it can't use the ODBC driver.

    Example error:

    Audit log:

    mysqld can't open TCP sockets on some ports, so it can't connect to the foreign server.

    Example error:

    Audit log:

    ODBC Catalog Information

Depending on the version of the ODBC driver used, some additional information on the tables exists, such as the table QUALIFIER or OWNER for old versions, named CATALOG or SCHEMA since ODBC version 3.

    CATALOG is apparently rarely used by most data sources, but SCHEMA (formerly OWNER) is and corresponds to the DATABASE information of MySQL.

The issue is that if no schema name is specified, some data sources return information for all schemas while others only return the information of the "default" schema. In addition, the "schema" or "database" used is sometimes implied by the connection string and sometimes is not. Sometimes it can also be included in a data source definition.

    CONNECT offers two ways to specify this information:

    1. When specified, the DBNAME create table option is regarded by ODBC tables as the SCHEMA name.

2. Table names can be specified as "cat.sch.tab", allowing the catalog and schema information to be set.

When both are used, the qualified table name has precedence over DBNAME. For instance:

Tabname    DBname    Description
t1                   The t1 table in the default or all schemas, depending on the DSN
test.t1              The t1 table of the test schema
test.t1    mydb      The t1 table of the test schema (test has precedence)
t1         mydb      The t1 table of the mydb schema
%.t1                 The t1 table in all schemas for all DSNs
test.%               All tables in the test schema
%.%.%                All tables in all catalogs and all schemas

    When creating a standard ODBC table, you should make sure only one source table is specified. Specifying more than one source table must be done only for CONNECT catalog tables (with CATFUNC=tables or columns).

    In particular, when column definition is left to the Discovery feature, if tables with the same name are present in several schemas and the schema name is not specified, several columns with the same name are generated. This will make the creation fail with a not very explicit error message.

Note: With some ODBC drivers, the DBNAME option or qualified table name is useless because the schema implied by the connection string or the definition of the data source has priority over the specified DBNAME.
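As an illustrative sketch only (the DSN, schema, and table names are hypothetical), the schema can be given either with the DBNAME option or with a qualified table name:

-- Schema given through the DBNAME option
CREATE TABLE remt1
ENGINE=CONNECT TABLE_TYPE=ODBC TABNAME='t1' DBNAME='test'
CONNECTION='DSN=SomeDSN';

-- Equivalent definition using a qualified table name
CREATE TABLE remt1b
ENGINE=CONNECT TABLE_TYPE=ODBC TABNAME='test.t1'
CONNECTION='DSN=SomeDSN';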

    Table name case

    Another issue when dealing with ODBC tables is the way table and column names are handled regarding of the case.

For instance, Oracle follows the SQL standard here: it converts non-quoted identifiers to upper case. This is correct and expected. PostgreSQL is not standard: it converts identifiers to lower case. MySQL/MariaDB are not standard either: they preserve identifiers on Linux and convert them to lower case on Windows.

    Think about that if you fail to see a table or a column on an ODBC data source.

    Non-ASCII Character Sets with Oracle

    When connecting through ODBC, the MariaDB Server operates as a client to the foreign database management system. As such, it requires that you configure MariaDB as you would configure native clients for the given database server.

In the case of connecting to Oracle, when using non-ASCII character sets, you need to properly set the NLS_LANG environment variable before starting the MariaDB Server.

    For instance, to test this on Oracle, create a table that contains a series of special characters:

    Then create a connecting table on MariaDB and attempt the same query:

While the character set is defined in a way that satisfies MariaDB, it has not been defined for Oracle (that is, by setting the NLS_LANG environment variable). As a result, Oracle does not provide the characters you want to MariaDB and CONNECT. The specific method of setting the NLS_LANG variable can vary depending on your operating system or distribution. If you're experiencing this issue, check your OS documentation for more details on how to properly set environment variables.

    Using systemd

With Linux distributions that use systemd, you need to set the environment variable in the service file (systemd doesn't read from the /etc/environment file).

This is done by setting the Environment option in the [Service] section of the unit. For instance,

    Then restart MariaDB,

    You can now retrieve the appropriate characters from Oracle tables:

    Using Windows

    Microsoft Windows doesn't ignore environment variables the way systemd does on Linux, but it does require that you set the NLS_LANG environment variable on your system. In order to do so, you need to open an elevated command-prompt, (that is, Cmd.exe with administrative privileges).

    From here, you can use the Setx command to set the variable. For instance,

Note: For more detail about this, see MDEV-17501.

    OPTION_LIST Values Supported by the ODBC Tables

The following options can be given as a comma-separated string in the OPTION_LIST value of the CREATE TABLE statement.

Name             Default    Description
MaxRes           0          Maximum number of rows returned by catalog functions
ConnectTimeout   -1         Connection timeout in seconds, unlimited by default
QueryTimeout     -1         Query timeout in seconds, unlimited by default
UseDSN           false      Use a pre-configured DSN

    This page is licensed: GPLv2

    CREATE TABLE Customer (
      CustomerID VARCHAR(5),
      CompanyName VARCHAR(40),
      ContactName VARCHAR(30),
      ContactTitle VARCHAR(30),
      Address VARCHAR(60),
      City VARCHAR(15),
      Region VARCHAR(15),
      PostalCode VARCHAR(10),
      Country VARCHAR(15),
      Phone VARCHAR(24),
      Fax VARCHAR(24))
    ENGINE=CONNECT table_type=ODBC block_size=10
    tabname='Customers'
CONNECTION='DSN=MS Access Database;DBQ=C:/Program Files/Microsoft Office/Office/1033/FPNWIND.MDB;';



    CREATE TABLE Customer ENGINE=CONNECT table_type=ODBC
      block_size=10 tabname='Customers'
      CONNECTION='DSN=MS Access Database;DBQ=C:/Program Files/Microsoft Office/Office/1033/FPNWIND.MDB;';
    CREATE TABLE empodbc (
      EMP_NO SMALLINT(5) NOT NULL,
  FULL_NAME VARCHAR(37) NOT NULL,
      PHONE_EXT VARCHAR(4) NOT NULL,
      HIRE_DATE DATE,
      DEPT_NO SMALLINT(3) NOT NULL,
      JOB_COUNTRY VARCHAR(15),
      SALARY DOUBLE(12,2) NOT NULL)
    ENGINE=CONNECT table_type=ODBC tabname='EMPLOYEE'
    CONNECTION='DSN=firebird';
    CREATE TABLE Custfix ENGINE=CONNECT File_name='customer.txt'
      table_type=fix block_size=20 AS SELECT * FROM customer;
    CREATE TABLE XLCONT
    ENGINE=CONNECT table_type=ODBC tabname='CONTACT'
    CONNECTION='DSN=Excel Files;DBQ=D:/Ber/Doc/Contact_BP.xls;';
    SELECT * FROM xlcont;
    CREATE TABLE ca04mul (DATE CHAR(19), OPERATION VARCHAR(64),
      Debit DOUBLE(15,2), Credit DOUBLE(15,2))
    ENGINE=CONNECT table_type=ODBC multiple=1
    qchar= '"' tabname='bank account'
    CONNECTION='DSN=Excel Files;DBQ=D:/Ber/CA/CA2004*.xls;';
    CREATE TABLE custnum (
      country VARCHAR(15) NOT NULL,
      customers INT(6) NOT NULL)
    ENGINE=CONNECT TABLE_TYPE=ODBC BLOCK_SIZE=10
    CONNECTION='DSN=MS Access Database;DBQ=C:/Program Files/Microsoft Office/Office/1033/FPNWIND.MDB;'
    SRCDEF='select country, count(*) as customers from customers group by country';
    CREATE TABLE custnum ENGINE=CONNECT TABLE_TYPE=ODBC BLOCK_SIZE=10
    CONNECTION='DSN=MS Access Database;DBQ=C:/Program Files/Microsoft Office/Office/1033/FPNWIND.MDB;'
    SRCDEF='select country, count(*) as customers from customers group by country';
    SELECT * FROM custnum WHERE customers > 3;
    INSERT INTO t1 SELECT * FROM t2;
    INSERT INTO t1 VALUES(2,'Deux') ON duplicate KEY UPDATE msg = 'Two';
    CREATE TABLE tolite (
      id INT(9) NOT NULL,
      nom VARCHAR(12) NOT NULL,
      nais DATE DEFAULT NULL,
      rem VARCHAR(32) DEFAULT NULL)
    ENGINE=CONNECT TABLE_TYPE=ODBC tabname='lite'
    CONNECTION='DSN=SQLite3 Datasource;Database=test.sqlite3'
    CHARSET=utf8 DATA_CHARSET=utf8;
    INSERT INTO tolite VALUES(1,'Toto',NOW(),'First'),
    (2,'Foo','2012-07-14','Second'),(4,'Machin','1968-05-30','Third');
    UPDATE tolite SET nom = 'Gillespie' WHERE id = 10;
    UPDATE lite SET nom = 'Gillespie' WHERE id = 10;
    UPDATE tolite SET nais = NOW() WHERE id = 2;
    UPDATE tolite SET nais = DATE('now') WHERE id = 2;
    CREATE TABLE crlite (
      command VARCHAR(128) NOT NULL,
      NUMBER INT(5) NOT NULL flag=1,
      message VARCHAR(255) flag=2)
    ENGINE=CONNECT table_type=odbc
    CONNECTION='Driver=SQLite3 ODBC Driver;Database=test.sqlite3;NoWCHAR=yes'
    option_list='Execsrc=1';
    SELECT * FROM crlite WHERE command = 'a command';
    SELECT * FROM crlite WHERE command =
    'CREATE TABLE lite (
    ID integer primary key autoincrement,
    name char(12) not null,
    birth date,
    rem varchar(32))';
    CREATE TABLE tlite
    ENGINE=CONNECT TABLE_TYPE=ODBC tabname='lite'
    CONNECTION='Driver=SQLite3 ODBC Driver;Database=test.sqlite3;NoWCHAR=yes'
    CHARSET=utf8 DATA_CHARSET=utf8;
    INSERT INTO tlite(name,birth) VALUES('Toto','2005-06-12');
    INSERT INTO tlite(name,birth,rem) VALUES('Foo',NULL,'No ID');
    INSERT INTO tlite(name,birth) VALUES('Truc','1998-10-27');
    INSERT INTO tlite(name,birth,rem) VALUES('John','1968-05-30','Last');
    SELECT * FROM tlite;
    SELECT * FROM crlite WHERE command =
    'update lite set birth = ''2012-07-14'' where ID = 2';
    SELECT * FROM tlite WHERE ID = 2;
    CREATE PROCEDURE send_cmd(cmd VARCHAR(255))
    MODIFIES SQL DATA
    SELECT * FROM crlite WHERE command = cmd;
    call send_cmd('drop tlite');
    SELECT * FROM crlite WHERE command IN (
      'update lite set birth = ''2012-07-14'' where ID = 2',
      'update lite set birth = ''2009-08-10'' where ID = 3');
    connection-string::= empty-string[;] | attribute[;] | attribute; connection-string
    empty-string ::=
    attribute ::= attribute-keyword=attribute-value | DRIVER=[{]attribute-value[}]
    attribute-keyword ::= DSN | UID | PWD | driver-defined-attribute-keyword
    attribute-value ::= character-string
    driver-defined-attribute-keyword = identifier
    CREATE TABLE tlite ENGINE=CONNECT TABLE_TYPE=ODBC tabname='lite'
    CONNECTION='SQLite3 Datasource' 
    OPTION_LIST='UseDSN=Yes,User=me,Password=mypass';
    Error Code: 1105 [unixODBC][Driver Manager]Can't open lib
    '/usr/cachesys/bin/libcacheodbc.so' : file not found
    sudo -u mysql ldd /usr/cachesys/bin/libcacheodbc.so
    Error Code: 1105 [unixODBC][Driver Manager]Can't open lib
    '/usr/cachesys/bin/libcacheodbc.so' : file not found
    type=AVC msg=audit(1384890085.406:76): avc: denied { execute }
    for pid=1433 comm="mysqld"
    path="/usr/cachesys/bin/libcacheodbc.so" dev=dm-0 ino=3279212
    scontext=unconfined_u:system_r:mysqld_t:s0
    tcontext=unconfined_u:object_r:usr_t:s0 tclass=file
    ERROR 1296 (HY000): Got error 174 '[unixODBC][FreeTDS][SQL Server]Unable to connect to data source' from CONNECT
    type=AVC msg=audit(1423094175.109:433): avc:  denied  { name_connect } for  pid=3193 comm="mysqld" dest=1433 scontext=system_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:mssql_port_t:s0 tclass=tcp_socket
    CREATE TABLE t1 (letter VARCHAR(4000));
    
    INSERT INTO t1 VALUES
       (UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW('C4'))),
       (UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW('C5'))),
       (UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW('C6')));
    
    SELECT letter, RAWTOHEX(letter) FROM t1;
    
    letter | RAWTOHEX(letter)
    -------|-----------------
    Ä     | C4
    Å     | C5
    Æ     | C6
    CREATE TABLE t1 (
       letter VARCHAR(4000))
    ENGINE=CONNECT
    DEFAULT CHARSET=utf8mb4
    CONNECTION='DSN=YOUR_DSN'
    TABLE_TYPE = 'ODBC'
    DATA_CHARSET = latin1
    TABNAME = 'YOUR_SCHEMA.T1';
    
    SELECT letter, HEX(letter) FROM t1;
    
    +--------+-------------+
    | letter | HEX(letter) |
    +--------+-------------+
| A      | 41          |
| ?      | 3F          |
| ?      | 3F          |
    +--------+-------------+
    # systemctl edit mariadb.service
    
    [Service]
    Environment=NLS_LANG=GERMAN_GERMANY.WE8ISO8859P1
    # systemctl restart mariadb.service
    SELECT letter, HEX(letter) FROM t1;
    
    +--------+-------------+
    | letter | HEX(letter) |
    +--------+-------------+
    | Ä      | C384        |
    | Å      | C385        |
    | Æ      | C386        |
    +--------+-------------+
    Setx NLS_LANG GERMAN_GERMANY.WE8ISO8859P1 /m

    InnoDB Online DDL Operations with the INSTANT Alter Algorithm

    Discover the INSTANT algorithm, which modifies table metadata without rebuilding the table, enabling extremely fast schema changes like adding columns.

    Column Operations

    ALTER TABLE ... ADD COLUMN

In MariaDB 10.3 and later, InnoDB supports adding columns to a table with ALGORITHM set to INSTANT if the new column is the last column in the table. See MDEV-11369 for more information. If the table has a hidden FTS_DOC_ID column, then this is not supported.

In MariaDB 10.4 and later, InnoDB supports adding columns to a table with ALGORITHM set to INSTANT, regardless of where in the column list the new column is added.

When this operation is performed with ALGORITHM set to INSTANT, the tablespace file will have a non-canonical storage format. See Non-canonical Storage Format Caused by Some Operations below for more information.

With the exception of adding an auto-increment column, this operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.
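For instance, a minimal sketch (the table and column names are only illustrative) of requesting the INSTANT algorithm and the non-locking strategy explicitly in the statement itself, rather than via the alter_algorithm session variable used in the examples below:

-- The statement fails with an error, instead of falling back to a slower
-- algorithm, if ALGORITHM=INSTANT or LOCK=NONE cannot be honored
ALTER TABLE tab
  ADD COLUMN c VARCHAR(50),
  ALGORITHM=INSTANT,
  LOCK=NONE;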

    For example, this succeeds:

And this succeeds in MariaDB 10.4 and later:

This applies to ALTER TABLE ... ADD COLUMN for InnoDB tables.

See Instant ADD COLUMN for InnoDB for more information.

    ALTER TABLE ... DROP COLUMN

In MariaDB 10.4 and later, InnoDB supports dropping columns from a table with ALGORITHM set to INSTANT. See MDEV-15562 for more information.

When this operation is performed with ALGORITHM set to INSTANT, the tablespace file will have a non-canonical storage format. See Non-canonical Storage Format Caused by Some Operations below for more information.

This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example:

This applies to ALTER TABLE ... DROP COLUMN for InnoDB tables.

    ALTER TABLE ... MODIFY COLUMN

This applies to ALTER TABLE ... MODIFY COLUMN for InnoDB tables.

    Reordering Columns

In MariaDB 10.4 and later, InnoDB supports reordering columns within a table with ALGORITHM set to INSTANT. See MDEV-15562 for more information.

When this operation is performed with ALGORITHM set to INSTANT, the tablespace file will have a non-canonical storage format. See Non-canonical Storage Format Caused by Some Operations below for more information.

This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example:

    Changing the Data Type of a Column

InnoDB does not support modifying a column's data type with ALGORITHM set to INSTANT in most cases. There are some exceptions:

• InnoDB supports increasing the length of VARCHAR columns with ALGORITHM set to INSTANT, unless it would require changing the number of bytes required to represent the column's length. A VARCHAR column that is between 0 and 255 bytes in size requires 1 byte to represent its length, while a VARCHAR column that is 256 bytes or longer requires 2 bytes to represent its length. This means that the length of a column cannot be increased with ALGORITHM set to INSTANT if the original length was less than 256 bytes and the new length is 256 bytes or more.

• In MariaDB 10.4 and later, InnoDB supports increasing the length of VARCHAR columns with ALGORITHM set to INSTANT with no restrictions if the ROW_FORMAT table option is set to REDUNDANT. See MDEV-15563 for more information.

• In MariaDB 10.4 and later, InnoDB also supports increasing the length of VARCHAR columns with ALGORITHM set to INSTANT in a more limited manner if the ROW_FORMAT table option is set to COMPACT, DYNAMIC, or COMPRESSED. In this scenario, the following limitations apply:

  • The length can be increased with ALGORITHM set to INSTANT if the original length of the column is 127 bytes or less, and the new length of the column is 256 bytes or more.

  • The length can be increased with ALGORITHM set to INSTANT if the original length of the column is 255 bytes or less, and the new length of the column is still 255 bytes or less.

  • The length can be increased with ALGORITHM set to INSTANT if the original length of the column is 256 bytes or more, and the new length of the column is still 256 bytes or more.

  • The length can not be increased with ALGORITHM set to INSTANT if the original length was between 128 bytes and 255 bytes, and the new length is 256 bytes or more.

  • See MDEV-15563 for more information.

The supported operations in this category support the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example, this fails:

    But this succeeds because the original length of the column is less than 256 bytes, and the new length is still less than 256 bytes:

    But this fails because the original length of the column is between 128 bytes and 255 bytes, and the new length is greater than 256 bytes:

But this succeeds in MariaDB 10.4 and later because the table has ROW_FORMAT=REDUNDANT:

And this succeeds in MariaDB 10.4 and later because the table has ROW_FORMAT=DYNAMIC and the column's original length is 127 bytes or less:

And this succeeds in MariaDB 10.4 and later because the table has ROW_FORMAT=COMPRESSED and the column's original length is 127 bytes or less:

But this fails even in MariaDB 10.4 and later because the table has ROW_FORMAT=DYNAMIC and the column's original length is between 128 bytes and 255 bytes:

    Changing a Column to NULL

In MariaDB 10.4 and later, InnoDB supports modifying a column to allow NULL values with ALGORITHM set to INSTANT if the ROW_FORMAT table option is set to REDUNDANT. See MDEV-15563 for more information.

This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example:

    Changing a Column to NOT NULL

InnoDB does not support modifying a column to not allow NULL values with ALGORITHM set to INSTANT.

    For example:

    Adding a New ENUM Option

InnoDB supports adding a new option to an ENUM column with ALGORITHM set to INSTANT. In order to add a new ENUM option with ALGORITHM set to INSTANT, the following requirements must be met:

    • It must be added to the end of the list.

    • The storage requirements must not change.

This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example, this succeeds:

    But this fails:

    Adding a New SET Option

InnoDB supports adding a new option to a SET column with ALGORITHM set to INSTANT. In order to add a new SET option with ALGORITHM set to INSTANT, the following requirements must be met:

    • It must be added to the end of the list.

    • The storage requirements must not change.

This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example, this succeeds:

    But this fails:

    Removing System Versioning from a Column

InnoDB supports removing system versioning from a column with ALGORITHM set to INSTANT. In order for this to work, the system_versioning_alter_history system variable must be set to KEEP. See MDEV-16330 for more information.

This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example:

    ALTER TABLE ... ALTER COLUMN

This applies to ALTER TABLE ... ALTER COLUMN for InnoDB tables.

    Setting a Column's Default Value

InnoDB supports modifying a column's DEFAULT value with ALGORITHM set to INSTANT.

This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example:

    Removing a Column's Default Value

InnoDB supports removing a column's DEFAULT value with ALGORITHM set to INSTANT.

This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    ALTER TABLE ... CHANGE COLUMN

InnoDB supports renaming a column with ALGORITHM set to INSTANT, unless the column's data type or attributes are changed in addition to the name.

This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example, this succeeds:

    But this fails:

This applies to ALTER TABLE ... CHANGE COLUMN for InnoDB tables.

    Index Operations

    ALTER TABLE ... ADD PRIMARY KEY

InnoDB does not support adding a primary key to a table with ALGORITHM set to INSTANT.

For example:

This applies to ALTER TABLE ... ADD PRIMARY KEY for InnoDB tables.

    ALTER TABLE ... DROP PRIMARY KEY

InnoDB does not support dropping a primary key with ALGORITHM set to INSTANT.

For example:

This applies to ALTER TABLE ... DROP PRIMARY KEY for InnoDB tables.

    ALTER TABLE ... ADD INDEX and CREATE INDEX

This applies to ALTER TABLE ... ADD INDEX and CREATE INDEX for InnoDB tables.

    Adding a Plain Index

InnoDB does not support adding a plain index to a table with ALGORITHM set to INSTANT.

    For example, this fails:

    And this fails:

    Adding a Fulltext Index

InnoDB does not support adding a FULLTEXT index to a table with ALGORITHM set to INSTANT.

    For example, this fails:

    And this fails:

    Adding a Spatial Index

InnoDB does not support adding a SPATIAL index to a table with ALGORITHM set to INSTANT.

    For example, this fails:

    And this fails:

    ALTER TABLE ... ADD FOREIGN KEY

InnoDB does not support adding foreign key constraints to a table with ALGORITHM set to INSTANT.

For example:

This applies to ALTER TABLE ... ADD FOREIGN KEY for InnoDB tables.

    ALTER TABLE ... DROP FOREIGN KEY

InnoDB supports dropping foreign key constraints from a table with ALGORITHM set to INSTANT.

This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

For example:

This applies to ALTER TABLE ... DROP FOREIGN KEY for InnoDB tables.

    Table Operations

    ALTER TABLE ... AUTO_INCREMENT=...

InnoDB supports changing a table's AUTO_INCREMENT value with ALGORITHM set to INSTANT.

This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

For example:

This applies to ALTER TABLE ... AUTO_INCREMENT=... for InnoDB tables.

    ALTER TABLE ... ROW_FORMAT=...

InnoDB does not support changing a table's row format with ALGORITHM set to INSTANT.

For example:

This applies to ALTER TABLE ... ROW_FORMAT=... for InnoDB tables.

    ALTER TABLE ... KEY_BLOCK_SIZE=...

InnoDB does not support changing a table's KEY_BLOCK_SIZE with ALGORITHM set to INSTANT.

For example:

This applies to ALTER TABLE ... KEY_BLOCK_SIZE=... for InnoDB tables.

    ALTER TABLE ... PAGE_COMPRESSED=1 and ALTER TABLE ... PAGE_COMPRESSION_LEVEL=...

InnoDB supports setting a table's PAGE_COMPRESSED value to 1 with ALGORITHM set to INSTANT. InnoDB does not support changing a table's PAGE_COMPRESSED value from 1 to 0 with ALGORITHM set to INSTANT.

InnoDB also supports changing a table's PAGE_COMPRESSION_LEVEL value with ALGORITHM set to INSTANT.

This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

See MDEV-16328 for more information.

    For example, this succeeds:

    And this succeeds:

    But this fails:

This applies to ALTER TABLE ... PAGE_COMPRESSED=... and ALTER TABLE ... PAGE_COMPRESSION_LEVEL=... for InnoDB tables.

    ALTER TABLE ... DROP SYSTEM VERSIONING

InnoDB does not support dropping system versioning from a table with ALGORITHM set to INSTANT.

For example:

This applies to ALTER TABLE ... DROP SYSTEM VERSIONING for InnoDB tables.

    ALTER TABLE ... DROP CONSTRAINT

InnoDB supports dropping a CHECK constraint from a table with ALGORITHM set to INSTANT. See MDEV-16331 for more information.

This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

For example:

This applies to ALTER TABLE ... DROP CONSTRAINT for InnoDB tables.

    ALTER TABLE ... FORCE

InnoDB does not support forcing a table rebuild with ALGORITHM set to INSTANT.

For example:

This applies to ALTER TABLE ... FORCE for InnoDB tables.

    ALTER TABLE ... ENGINE=InnoDB

InnoDB does not support forcing a table rebuild with ALGORITHM set to INSTANT.

For example:

This applies to ALTER TABLE ... ENGINE=InnoDB for InnoDB tables.

    OPTIMIZE TABLE ...

InnoDB does not support optimizing a table with ALGORITHM set to INSTANT.

For example:

This applies to OPTIMIZE TABLE for InnoDB tables.

    ALTER TABLE ... RENAME TO and RENAME TABLE ...

InnoDB supports renaming a table with ALGORITHM set to INSTANT.

This operation supports the exclusive locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to EXCLUSIVE. When this strategy is used, concurrent DML is not permitted.

For example, this succeeds:

And this succeeds:

This applies to ALTER TABLE ... RENAME TO and RENAME TABLE for InnoDB tables.

    Limitations

    Limitations Related to Generated (Virtual and Persistent/Stored) Columns

Generated (Virtual and Persistent/Stored) Columns do not currently support online DDL for all of the same operations that are supported for "real" columns.

See Generated (Virtual and Persistent/Stored) Columns: Statement Support for more information on the limitations.

    Non-canonical Storage Format Caused by Some Operations

Some operations cause a table's tablespace file to use a non-canonical storage format when the INSTANT algorithm is used. The affected operations include:

• Adding a column.

• Dropping a column.

• Reordering columns.

These operations require the following non-canonical changes to the storage format:

• A hidden metadata record at the start of the clustered index is used to store each column's DEFAULT value. This makes it possible to add new columns that have default values without rebuilding the table.

• A BLOB in the hidden metadata record is used to store column mappings. This makes it possible to drop or reorder columns without rebuilding the table. This also makes it possible to add columns to any position or drop columns from any position in the table without rebuilding the table.

• If a column is dropped, old records will contain garbage in that column's former position, and new records are written with NULL values, empty strings, or dummy values.

This non-canonical storage format has the potential to incur some performance or storage overhead for all subsequent DML operations. If you notice issues like this and you want to normalize a table's storage format to avoid the problem, then you can do so by forcing a table rebuild by executing ALTER TABLE ... FORCE with ALGORITHM set to INPLACE.
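A minimal sketch of such a rebuild (assuming a table named tab) could be:

-- Force an in-place rebuild to restore the canonical storage format
ALTER TABLE tab FORCE, ALGORITHM=INPLACE;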

However, keep in mind that there are certain scenarios where you may not be able to rebuild the table with ALGORITHM set to INPLACE. See InnoDB Online DDL Operations with ALGORITHM=INPLACE: Limitations for more information on those cases. If you hit one of those scenarios, but you still want to rebuild the table, then you would have to do so with ALGORITHM set to COPY.

    Known Bugs

There are some known bugs that could lead to issues when an InnoDB DDL operation is performed using the INSTANT algorithm. This algorithm will usually be chosen by default if the operation supports it.

The effect of many of these bugs is that the table seems to forget that its tablespace file is in the non-canonical storage format.

If you are concerned that a table may be affected by one of these bugs, then your best option would be to normalize the table structure. This can be done by rebuilding the table:

If you are concerned about these bugs, and you want to perform an operation that supports the INSTANT algorithm, but you want to avoid using that algorithm, then you can set the algorithm to INPLACE and add the FORCE keyword to the ALTER TABLE statement:

    Closed Bugs

• MDEV-20066: This bug could cause a table to become corrupt if a column was added instantly. It has been fixed.

• MDEV-20117: This bug could cause a table to become corrupt if a column was dropped instantly. It has been fixed.

• MDEV-19743: This bug could cause a table to become corrupt during page reorganization if a column was added instantly. It has been fixed.

• MDEV-19783: This bug could cause a table to become corrupt if a column was added instantly. It has been fixed.

• MDEV-20090: This bug could cause a table to become corrupt if columns were added, dropped, or reordered instantly. It has been fixed.

• MDEV-18519: This bug could cause a table to become corrupt if a column was added instantly. It is fixed in MariaDB 10.6.9 and later; it is not and will not be fixed in versions older than MariaDB 10.6.

    This page is licensed: CC BY-SA / Gnu FDL


    InnoDB Online DDL Operations with the INPLACE Alter Algorithm

    Learn about operations supported by the INPLACE algorithm, which rebuilds the table but allows concurrent DML, offering a balance between performance and availability.

    Supported Operations by Inheritance

When the ALGORITHM clause is set to INPLACE, the supported operations are a superset of the operations that are supported when the ALGORITHM clause is set to NOCOPY. Similarly, when the ALGORITHM clause is set to NOCOPY, the supported operations are a superset of the operations that are supported when the ALGORITHM clause is set to INSTANT.

    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab ADD COLUMN c VARCHAR(50);
    Query OK, 0 rows affected (0.004 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab ADD COLUMN c VARCHAR(50) AFTER a;
    Query OK, 0 rows affected (0.004 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab DROP COLUMN c;
    Query OK, 0 rows affected (0.004 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(50) AFTER a;
    Query OK, 0 rows affected (0.004 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab MODIFY COLUMN c INT;
    ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    ) CHARACTER SET=latin1;
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(100);
    Query OK, 0 rows affected (0.005 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(255)
    ) CHARACTER SET=latin1;
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(256);
    ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(200)
    ) ROW_FORMAT=REDUNDANT;
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(300);
    Query OK, 0 rows affected (0.004 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(127)
    ) ROW_FORMAT=DYNAMIC
      CHARACTER SET=latin1;
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(300);
    Query OK, 0 rows affected (0.003 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(127)
    ) ROW_FORMAT=COMPRESSED
      CHARACTER SET=latin1;
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(300);
    Query OK, 0 rows affected (0.003 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(128)
    ) ROW_FORMAT=DYNAMIC
      CHARACTER SET=latin1;
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(300);
    ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50) NOT NULL
    ) ROW_FORMAT=REDUNDANT;
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(50) NULL;
    Query OK, 0 rows affected (0.004 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    ) ROW_FORMAT=REDUNDANT;
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(50) NOT NULL;
    ERROR 1845 (0A000): ALGORITHM=INSTANT is not supported for this operation. Try ALGORITHM=INPLACE
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c ENUM('red', 'green')
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab MODIFY COLUMN c ENUM('red', 'green', 'blue');
    Query OK, 0 rows affected (0.002 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c ENUM('red', 'green')
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab MODIFY COLUMN c ENUM('red', 'blue', 'green');
    ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c SET('red', 'green')
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab MODIFY COLUMN c SET('red', 'green', 'blue');
    Query OK, 0 rows affected (0.002 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c SET('red', 'green')
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab MODIFY COLUMN c SET('red', 'blue', 'green');
    ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50) WITH SYSTEM VERSIONING
    );
    
    SET SESSION system_versioning_alter_history='KEEP';
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(50) WITHOUT SYSTEM VERSIONING;
    Query OK, 0 rows affected (0.004 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab ALTER COLUMN c SET DEFAULT 'NO value explicitly provided.';
    Query OK, 0 rows affected (0.003 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50) DEFAULT 'NO value explicitly provided.'
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab ALTER COLUMN c DROP DEFAULT;
    Query OK, 0 rows affected (0.002 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab CHANGE COLUMN c str VARCHAR(50);
    Query OK, 0 rows affected (0.004 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab CHANGE COLUMN c num INT;
    ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
    CREATE OR REPLACE TABLE tab (
       a INT,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION sql_mode='STRICT_TRANS_TABLES';
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab ADD PRIMARY KEY (a);
    ERROR 1845 (0A000): ALGORITHM=INSTANT is not supported for this operation. Try ALGORITHM=INPLACE
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab DROP PRIMARY KEY;
    ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Dropping a primary key is not allowed without also adding a new primary key. Try ALGORITHM=COPY
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab ADD INDEX b_index (b);
    ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: ADD INDEX. Try ALGORITHM=NOCOPY
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INSTANT';
    CREATE INDEX b_index ON tab (b);
    ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: ADD INDEX. Try ALGORITHM=NOCOPY
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ADD FULLTEXT INDEX b_index (b);
    Query OK, 0 rows affected (0.042 sec)
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab ADD FULLTEXT INDEX c_index (c);
    ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: ADD INDEX. Try ALGORITHM=NOCOPY
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    CREATE FULLTEXT INDEX b_index ON tab (b);
    Query OK, 0 rows affected (0.040 sec)
    
    SET SESSION alter_algorithm='INSTANT';
    CREATE FULLTEXT INDEX c_index ON tab (c);
    ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: ADD INDEX. Try ALGORITHM=NOCOPY
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c GEOMETRY NOT NULL
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab ADD SPATIAL INDEX c_index (c);
    ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: ADD INDEX. Try ALGORITHM=NOCOPY
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c GEOMETRY NOT NULL
    );
    
    SET SESSION alter_algorithm='INSTANT';
    CREATE SPATIAL INDEX c_index ON tab (c);
    ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: ADD INDEX. Try ALGORITHM=NOCOPY
    CREATE OR REPLACE TABLE tab1 (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50),
       d INT
    );
    
    CREATE OR REPLACE TABLE tab2 (
       a INT PRIMARY KEY,
       b VARCHAR(50)
    );
    
    SET SESSION foreign_key_checks=OFF;
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab1 ADD FOREIGN KEY tab2_fk (d) REFERENCES tab2 (a);
    ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: ADD INDEX. Try ALGORITHM=NOCOPY
    CREATE OR REPLACE TABLE tab2 (
       a INT PRIMARY KEY,
       b VARCHAR(50)
    );
    
    CREATE OR REPLACE TABLE tab1 (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50),
       d INT,
       FOREIGN KEY tab2_fk (d) REFERENCES tab2 (a)
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab1 DROP FOREIGN KEY tab2_fk; 
    Query OK, 0 rows affected (0.004 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab AUTO_INCREMENT=100;
    Query OK, 0 rows affected (0.002 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    ) ROW_FORMAT=DYNAMIC;
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab ROW_FORMAT=COMPRESSED;
    ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Changing table options requires the table to be rebuilt. Try ALGORITHM=INPLACE
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    ) ROW_FORMAT=COMPRESSED
      KEY_BLOCK_SIZE=4;
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab KEY_BLOCK_SIZE=2;
    ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Changing table options requires the table to be rebuilt. Try ALGORITHM=INPLACE
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab PAGE_COMPRESSED=1;
    Query OK, 0 rows affected (0.004 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    ) PAGE_COMPRESSED=1
      PAGE_COMPRESSION_LEVEL=5;
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab PAGE_COMPRESSION_LEVEL=4;
    Query OK, 0 rows affected (0.004 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    ) PAGE_COMPRESSED=1;
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab PAGE_COMPRESSED=0;
    ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Changing table options requires the table to be rebuilt. Try ALGORITHM=INPLACE
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    ) WITH SYSTEM VERSIONING;
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab DROP SYSTEM VERSIONING;
    ERROR 1845 (0A000): ALGORITHM=INSTANT is not supported for this operation. Try ALGORITHM=INPLACE
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50),
       CONSTRAINT b_not_empty CHECK (b != '')
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab DROP CONSTRAINT b_not_empty;
    Query OK, 0 rows affected (0.002 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab FORCE;
    ERROR 1845 (0A000): ALGORITHM=INSTANT is not supported for this operation. Try ALGORITHM=INPLACE
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab ENGINE=InnoDB;
    ERROR 1845 (0A000): ALGORITHM=INSTANT is not supported for this operation. Try ALGORITHM=INPLACE
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SHOW GLOBAL VARIABLES WHERE Variable_name IN('innodb_defragment', 'innodb_optimize_fulltext_only');
    +-------------------------------+-------+
    | Variable_name                 | Value |
    +-------------------------------+-------+
    | innodb_defragment             | OFF   |
    | innodb_optimize_fulltext_only | OFF   |
    +-------------------------------+-------+
    2 rows in set (0.001 sec)
    
    SET SESSION alter_algorithm='INSTANT';
    OPTIMIZE TABLE tab;
    +---------+----------+----------+------------------------------------------------------------------------------+
    | Table   | Op       | Msg_type | Msg_text                                                                     |
    +---------+----------+----------+------------------------------------------------------------------------------+
    | db1.tab | optimize | note     | Table does not support optimize, doing recreate + analyze instead            |
    | db1.tab | optimize | error    | ALGORITHM=INSTANT is not supported for this operation. Try ALGORITHM=INPLACE |
    | db1.tab | optimize | status   | Operation failed                                                             |
    +---------+----------+----------+------------------------------------------------------------------------------+
    3 rows in set, 1 warning (0.002 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INSTANT';
    ALTER TABLE tab RENAME TO old_tab;
    Query OK, 0 rows affected (0.008 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INSTANT';
    RENAME TABLE tab TO old_tab;
    Query OK, 0 rows affected (0.008 sec)
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab FORCE;
    Query OK, 0 rows affected (0.008 sec)
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab FORCE;
    Query OK, 0 rows affected (0.008 sec)
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ADD COLUMN c VARCHAR(50), FORCE;

    Therefore, when the ALGORITHM clause is set to INPLACE, some operations are supported by inheritance. See the following additional pages for more information about these supported operations:

    • InnoDB Online DDL Operations with ALGORITHM=NOCOPY

    • InnoDB Online DDL Operations with ALGORITHM=INSTANT

    Column Operations

    ALTER TABLE ... ADD COLUMN

    InnoDB supports adding columns to a table with ALGORITHM set to INPLACE.

    The table is rebuilt, which means that all of the data is reorganized substantially, and the indexes are rebuilt. As a result, the operation is quite expensive.

    With the exception of adding an auto-increment column, this operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example:
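
    A minimal sketch, using an illustrative table tab in the spirit of the examples collected at the end of this page:

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50));
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ADD COLUMN c VARCHAR(50);  -- succeeds; the table is rebuilt, but concurrent DML is allowed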

    This applies to ALTER TABLE ... ADD COLUMN for InnoDB tables.

    ALTER TABLE ... DROP COLUMN

    InnoDB supports dropping columns from a table with ALGORITHM set to INPLACE.

    The table is rebuilt, which means that all of the data is reorganized substantially, and the indexes are rebuilt. As a result, the operation is quite expensive.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example:
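
    A minimal sketch along the same lines (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50), c VARCHAR(50));
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab DROP COLUMN c;  -- succeeds; the table is rebuilt, but concurrent DML is allowed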

    This applies to ALTER TABLE ... DROP COLUMN for InnoDB tables.

    ALTER TABLE ... MODIFY COLUMN

    This applies to ALTER TABLE ... MODIFY COLUMN for InnoDB tables.

    Reordering Columns

    InnoDB supports reordering columns within a table with ALGORITHM set to INPLACE.

    The table is rebuilt, which means that all of the data is reorganized substantially, and the indexes are rebuilt. As a result, the operation is quite expensive.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example:
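
    A minimal sketch (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50), c VARCHAR(50));
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(50) AFTER a;  -- succeeds; the table is rebuilt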

    Changing the Data Type of a Column

    InnoDB does not support modifying a column's data type with ALGORITHM set to INPLACE in most cases. There are some exceptions:

    • In and later, InnoDB supports increasing the length of VARCHAR columns with ALGORITHM set to INPLACE, unless it would require changing the number of bytes required to represent the column's length. A VARCHAR column that is between 0 and 255 bytes in size requires 1 byte to represent its length, while a VARCHAR column that is 256 bytes or longer requires 2 bytes to represent its length. This means that the length of a column cannot be increased with ALGORITHM set to INPLACE if the original length was less than 256 bytes, and the new length is 256 bytes or more.

    • In and later, InnoDB supports increasing the length of VARCHAR columns with ALGORITHM set to INPLACE in the cases where the operation supports having the ALGORITHM clause set to INSTANT.

    See InnoDB Online DDL Operations with ALGORITHM=INSTANT: Changing the Data Type of a Column for more information.

    For example, this fails:

    But this succeeds in and later, because the original length of the column is less than 256 bytes, and the new length is still less than 256 bytes:

    But this fails in and later, because the original length of the column is less than 256 bytes, and the new length is greater than 256 bytes:
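
    The three cases referred to above can be sketched together as follows (names illustrative, matching the examples collected at the end of this page):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50), c VARCHAR(50)) CHARACTER SET=latin1;
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab MODIFY COLUMN c INT;           -- fails: the column type cannot be changed in place
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(100);  -- succeeds: the length stays below 256 bytes
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(256);  -- fails: the length prefix would grow from 1 to 2 bytes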

    Changing a Column to NULL

    InnoDB supports modifying a column to allow NULL values with ALGORITHM set to INPLACE.

    The table is rebuilt, which means that all of the data is reorganized substantially, and the indexes are rebuilt. As a result, the operation is quite expensive.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example:
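
    A minimal sketch (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50), c VARCHAR(50) NOT NULL);
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(50) NULL;  -- succeeds; the table is rebuilt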

    Changing a Column to NOT NULL

    InnoDB supports modifying a column to not allow NULL values with ALGORITHM set to INPLACE. It is required for strict mode to be enabled in SQL_MODE. The operation will fail if the column contains any NULL values. Changes that would interfere with referential integrity are also not permitted.

    The table is rebuilt, which means that all of the data is reorganized substantially, and the indexes are rebuilt. As a result, the operation is quite expensive.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example:
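
    A minimal sketch, assuming strict mode is enabled (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50), c VARCHAR(50));
    SET SESSION sql_mode='STRICT_TRANS_TABLES';
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(50) NOT NULL;  -- succeeds; fails instead if c contains NULL values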

    Adding a New ENUM Option

    InnoDB supports adding a new ENUM option to a column with ALGORITHM set to INPLACE. In order to add a new ENUM option with ALGORITHM set to INPLACE, the following requirements must be met:

    • It must be added to the end of the list.

    • The storage requirements must not change.

    This operation only changes the table's metadata, so the table does not have to be rebuilt.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example, this succeeds:

    But this fails:
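
    Both cases can be sketched as follows (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50), c ENUM('red', 'green'));
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab MODIFY COLUMN c ENUM('red', 'green', 'blue');  -- succeeds: 'blue' is appended at the end
    ALTER TABLE tab MODIFY COLUMN c ENUM('red', 'blue', 'green');  -- fails: existing options are reordered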

    Adding a New SET Option

    InnoDB supports adding a new SET option to a column with ALGORITHM set to INPLACE. In order to add a new SET option with ALGORITHM set to INPLACE, the following requirements must be met:

    • It must be added to the end of the list.

    • The storage requirements must not change.

    This operation only changes the table's metadata, so the table does not have to be rebuilt.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example, this succeeds:

    But this fails:
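
    Both cases can be sketched in the same way (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50), c SET('red', 'green'));
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab MODIFY COLUMN c SET('red', 'green', 'blue');  -- succeeds: 'blue' is appended at the end
    ALTER TABLE tab MODIFY COLUMN c SET('red', 'blue', 'green');  -- fails: existing options are reordered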

    Removing System Versioning from a Column

    In and later, InnoDB supports removing system versioning from a column with ALGORITHM set to INPLACE. In order for this to work, the system_versioning_alter_history system variable must be set to KEEP. See MDEV-16330 for more information.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example:
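
    A minimal sketch (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50), c VARCHAR(50) WITH SYSTEM VERSIONING);
    SET SESSION system_versioning_alter_history='KEEP';
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(50) WITHOUT SYSTEM VERSIONING;  -- succeeds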

    ALTER TABLE ... ALTER COLUMN

    This applies to ALTER TABLE ... ALTER COLUMN for InnoDB tables.

    Setting a Column's Default Value

    InnoDB supports modifying a column's DEFAULT value with ALGORITHM set to INPLACE.

    This operation only changes the table's metadata, so the table does not have to be rebuilt.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted. For example:
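
    A minimal sketch (names and the default value are illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50), c VARCHAR(50));
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ALTER COLUMN c SET DEFAULT 'none';  -- succeeds; metadata-only change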

    Removing a Column's Default Value

    InnoDB supports removing a column's DEFAULT value with ALGORITHM set to INPLACE.

    This operation only changes the table's metadata, so the table does not have to be rebuilt.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example:
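
    A minimal sketch (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50), c VARCHAR(50) DEFAULT 'none');
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ALTER COLUMN c DROP DEFAULT;  -- succeeds; metadata-only change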

    ALTER TABLE ... CHANGE COLUMN

    InnoDB supports renaming a column with ALGORITHM set to INPLACE, unless the column's data type or attributes changed in addition to the name.

    This operation only changes the table's metadata, so the table does not have to be rebuilt.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example, this succeeds:

    But this fails:
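
    Both cases can be sketched as follows (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50), c VARCHAR(50));
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab CHANGE COLUMN c str VARCHAR(50);  -- succeeds: rename only, same data type
    ALTER TABLE tab CHANGE COLUMN str num INT;        -- fails: the data type changes as well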

    This applies to ALTER TABLE ... CHANGE COLUMN for InnoDB tables.

    Index Operations

    ALTER TABLE ... ADD PRIMARY KEY

    InnoDB supports adding a primary key to a table with ALGORITHM set to INPLACE.

    If the new primary key column is not defined as NOT NULL, then it is highly recommended for strict mode to be enabled in SQL_MODE. Otherwise, NULL values are silently converted to the default value for the given data type, which is probably not the desired behavior in this scenario.

    The table is rebuilt, which means that all of the data is reorganized substantially, and the indexes are rebuilt. As a result, the operation is quite expensive.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example, this succeeds:

    But this fails:

    And this fails:
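
    The three cases referred to above can be sketched as follows (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT, b VARCHAR(50), c VARCHAR(50));
    SET SESSION sql_mode='STRICT_TRANS_TABLES';
    SET SESSION alter_algorithm='INPLACE';
    -- succeeds on an empty table; fails with "Data truncated" if a contains NULL values,
    -- or with "Duplicate entry" if a contains duplicate values
    ALTER TABLE tab ADD PRIMARY KEY (a);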

    This applies to ALTER TABLE ... ADD PRIMARY KEY for InnoDB tables.

    ALTER TABLE ... DROP PRIMARY KEY

    InnoDB does not support dropping a primary key with ALGORITHM set to INPLACE in most cases.

    If you try to do so, then you will see an error. InnoDB only supports this operation with ALGORITHM set to COPY. Concurrent DML is not permitted.

    However, there is an exception. If you are dropping a primary key, and adding a new one at the same time, then that operation can be performed with ALGORITHM set to INPLACE. This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example, this fails:

    But this succeeds:
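
    Both cases can be sketched as follows (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50), c VARCHAR(50));
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab DROP PRIMARY KEY;                       -- fails: no replacement primary key is added
    ALTER TABLE tab DROP PRIMARY KEY, ADD PRIMARY KEY (b);  -- succeeds: a new primary key is added in the same statement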

    This applies to ALTER TABLE ... DROP PRIMARY KEY for InnoDB tables.

    ALTER TABLE ... ADD INDEX and CREATE INDEX

    This applies to ALTER TABLE ... ADD INDEX and CREATE INDEX for InnoDB tables.

    Adding a Plain Index

    InnoDB supports adding a plain index to a table with ALGORITHM set to INPLACE. The table is not rebuilt.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example, this succeeds:

    And this succeeds:
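
    Both forms can be sketched as follows (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50), c VARCHAR(50));
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ADD INDEX b_index (b);  -- succeeds; the table is not rebuilt
    CREATE INDEX c_index ON tab (c);        -- succeeds; equivalent syntax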

    Adding a Fulltext Index

    InnoDB supports adding a FULLTEXT index to a table with ALGORITHM set to INPLACE. The table is not rebuilt in some cases.

    However, there are some limitations, such as:

    • Adding a FULLTEXT index to a table that does not have a user-defined FTS_DOC_ID column will require the table to be rebuilt once. When the table is rebuilt, the system adds a hidden FTS_DOC_ID column. From that point forward, adding additional FULLTEXT indexes to the same table will not require the table to be rebuilt when ALGORITHM is set to INPLACE.

    • Only one FULLTEXT index may be added at a time when ALGORITHM is set to INPLACE.

    • If a table has more than one FULLTEXT index, then it cannot be rebuilt by any ALTER TABLE operations when ALGORITHM is set to INPLACE.

    • If a table has a FULLTEXT index, then it cannot be rebuilt by any ALTER TABLE operations when the LOCK clause is set to NONE.

    This operation supports a read-only locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to SHARED. When this strategy is used, read-only concurrent DML is permitted.

    For example, this succeeds, but requires the table to be rebuilt, so that the hidden FTS_DOC_ID column can be added:

    And this succeeds in the same way as above:

    And this succeeds, and the second command does not require the table to be rebuilt:

    But this second command fails, because only one FULLTEXT index can be added at a time:

    And this third command fails, because a table cannot be rebuilt when it has more than one FULLTEXT index:
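
    A compact sketch of the cases described above (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50), c VARCHAR(50), d VARCHAR(50));
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ADD FULLTEXT INDEX b_index (b);  -- succeeds; rebuilds once to add the hidden FTS_DOC_ID column
    ALTER TABLE tab ADD FULLTEXT INDEX c_index (c);  -- succeeds; no further rebuild is needed
    ALTER TABLE tab ADD FULLTEXT INDEX c2 (c), ADD FULLTEXT INDEX d2 (d);  -- fails: only one FULLTEXT index can be added at a time
    ALTER TABLE tab FORCE;  -- fails: a table with more than one FULLTEXT index cannot be rebuilt in place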

    Adding a Spatial Index

    InnoDB supports adding a SPATIAL index to a table with ALGORITHM set to INPLACE.

    However, there are some limitations, such as:

    • If a table has a SPATIAL index, then it cannot be rebuilt by any ALTER TABLE operations when the LOCK clause is set to NONE.

    This operation supports a read-only locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to SHARED. When this strategy is used, read-only concurrent DML is permitted.

    For example, this succeeds:

    And this succeeds in the same way as above:
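
    Both forms can be sketched as follows (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50), c GEOMETRY NOT NULL);
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ADD SPATIAL INDEX c_index (c);  -- succeeds
    CREATE SPATIAL INDEX c_index2 ON tab (c);       -- succeeds; equivalent syntax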

    ALTER TABLE ... DROP INDEX and DROP INDEX

    InnoDB supports dropping indexes from a table with ALGORITHM set to INPLACE.

    This operation only changes the table's metadata, so the table does not have to be rebuilt.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example, this succeeds:

    And this succeeds:
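
    Both forms can be sketched as follows (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50), c VARCHAR(50), INDEX b_index (b), INDEX c_index (c));
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab DROP INDEX b_index;  -- succeeds; metadata-only change
    DROP INDEX c_index ON tab;           -- succeeds; equivalent syntax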

    This applies to ALTER TABLE ... DROP INDEX and DROP INDEX for InnoDB tables.

    ALTER TABLE ... ADD FOREIGN KEY

    InnoDB supports adding foreign key constraints to a table with ALGORITHM set to INPLACE. In order to add a new foreign key constraint to a table with ALGORITHM set to INPLACE, the foreign_key_checks system variable needs to be set to OFF. If it is set to ON, then ALGORITHM=COPY is required.

    This operation only changes the table's metadata, so the table does not have to be rebuilt.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example, this fails:

    But this succeeds:
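
    Both cases can be sketched as follows (names illustrative):

    CREATE OR REPLACE TABLE tab2 (a INT PRIMARY KEY, b VARCHAR(50));
    CREATE OR REPLACE TABLE tab1 (a INT PRIMARY KEY, b VARCHAR(50), d INT);
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab1 ADD FOREIGN KEY tab2_fk (d) REFERENCES tab2 (a);  -- fails while foreign_key_checks=ON
    SET SESSION foreign_key_checks=OFF;
    ALTER TABLE tab1 ADD FOREIGN KEY tab2_fk (d) REFERENCES tab2 (a);  -- succeeds; metadata-only change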

    This applies to ALTER TABLE ... ADD FOREIGN KEY for InnoDB tables.

    ALTER TABLE ... DROP FOREIGN KEY

    InnoDB supports dropping foreign key constraints from a table with ALGORITHM set to INPLACE.

    This operation only changes the table's metadata, so the table does not have to be rebuilt.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example:
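
    A minimal sketch (names illustrative):

    CREATE OR REPLACE TABLE tab2 (a INT PRIMARY KEY);
    CREATE OR REPLACE TABLE tab1 (a INT PRIMARY KEY, d INT, FOREIGN KEY tab2_fk (d) REFERENCES tab2 (a));
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab1 DROP FOREIGN KEY tab2_fk;  -- succeeds; metadata-only change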

    This applies to ALTER TABLE ... DROP FOREIGN KEY for InnoDB tables.

    Table Operations

    ALTER TABLE ... AUTO_INCREMENT=...

    InnoDB supports changing a table's AUTO_INCREMENT value with ALGORITHM set to INPLACE. This operation should finish instantly. The table is not rebuilt.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example:
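
    A minimal sketch (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT AUTO_INCREMENT PRIMARY KEY, b VARCHAR(50));
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab AUTO_INCREMENT=100;  -- succeeds; the table is not rebuilt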

    This applies to ALTER TABLE ... AUTO_INCREMENT=... for InnoDB tables.

    ALTER TABLE ... ROW_FORMAT=...

    InnoDB supports changing a table's row format with ALGORITHM set to INPLACE.

    The table is rebuilt, which means that all of the data is reorganized substantially, and the indexes are rebuilt. As a result, the operation is quite expensive.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example:
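
    A minimal sketch (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50)) ROW_FORMAT=DYNAMIC;
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ROW_FORMAT=COMPRESSED;  -- succeeds; the table is rebuilt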

    This applies to ALTER TABLE ... ROW_FORMAT=... for InnoDB tables.

    ALTER TABLE ... KEY_BLOCK_SIZE=...

    InnoDB supports changing a table's KEY_BLOCK_SIZE with ALGORITHM set to INPLACE.

    The table is rebuilt, which means that all of the data is reorganized substantially, and the indexes are rebuilt. As a result, the operation is quite expensive.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example:
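
    A minimal sketch (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50)) ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=4;
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab KEY_BLOCK_SIZE=2;  -- succeeds; the table is rebuilt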

    This applies to KEY_BLOCK_SIZE=... for InnoDB tables.

    ALTER TABLE ... PAGE_COMPRESSED=... and ALTER TABLE ... PAGE_COMPRESSION_LEVEL=...

    In and later, InnoDB supports setting a table's PAGE_COMPRESSED value to 1 with ALGORITHM set to INPLACE. InnoDB also supports changing a table's PAGE_COMPRESSED value from 1 to 0 with ALGORITHM set to INPLACE.

    In these versions, InnoDB also supports changing a table's PAGE_COMPRESSION_LEVEL value with ALGORITHM set to INPLACE.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    See MDEV-16328 for more information.

    For example, this succeeds:

    And this succeeds:

    And this succeeds:
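
    The cases above can be sketched as follows (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50));
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab PAGE_COMPRESSED=1;         -- succeeds
    ALTER TABLE tab PAGE_COMPRESSION_LEVEL=4;  -- succeeds
    ALTER TABLE tab PAGE_COMPRESSED=0;         -- succeeds; the table is rebuilt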

    This applies to PAGE_COMPRESSED=... and PAGE_COMPRESSION_LEVEL=... for InnoDB tables.

    ALTER TABLE ... DROP SYSTEM VERSIONING

    InnoDB supports dropping system versioning from a table with ALGORITHM set to INPLACE.

    This operation supports the read-only locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to SHARED. When this strategy is used, read-only concurrent DML is permitted.

    For example:
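
    A minimal sketch (names illustrative; system_versioning_alter_history is set to KEEP so the ALTER is permitted):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50)) WITH SYSTEM VERSIONING;
    SET SESSION system_versioning_alter_history='KEEP';
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab DROP SYSTEM VERSIONING;  -- succeeds; read-only concurrent DML while it runs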

    This applies to ALTER TABLE ... DROP SYSTEM VERSIONING for InnoDB tables.

    ALTER TABLE ... DROP CONSTRAINT

    In and later, InnoDB supports dropping a CHECK constraint from a table with ALGORITHM set to INPLACE. See MDEV-16331 for more information.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example:
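
    A minimal sketch (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50), CONSTRAINT b_not_empty CHECK (b != ''));
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab DROP CONSTRAINT b_not_empty;  -- succeeds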

    This applies to ALTER TABLE ... DROP CONSTRAINT for InnoDB tables.

    ALTER TABLE ... FORCE

    InnoDB supports forcing a table rebuild with ALGORITHM set to INPLACE.

    The table is rebuilt, which means that all of the data is reorganized substantially, and the indexes are rebuilt. As a result, the operation is quite expensive.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example:
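
    A minimal sketch (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50));
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab FORCE;  -- succeeds; the table is rebuilt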

    This applies to ALTER TABLE ... FORCE for InnoDB tables.

    ALTER TABLE ... ENGINE=InnoDB

    InnoDB supports forcing a table rebuild with ALGORITHM set to INPLACE.

    The table is rebuilt, which means that all of the data is reorganized substantially, and the indexes are rebuilt. As a result, the operation is quite expensive.

    This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to NONE. When this strategy is used, all concurrent DML is permitted.

    For example:
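
    A minimal sketch (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50));
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ENGINE=InnoDB;  -- succeeds; the table is rebuilt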

    This applies to ALTER TABLE ... ENGINE=InnoDB for InnoDB tables.

    OPTIMIZE TABLE ...

    InnoDB supports optimizing a table with ALGORITHM set to INPLACE.

    If the innodb_defragment system variable is set to OFF, and if the innodb_optimize_fulltext_only system variable is also set to OFF, then OPTIMIZE TABLE is equivalent to ALTER TABLE … FORCE.

    The table is rebuilt, which means that all of the data is reorganized substantially, and the indexes are rebuilt. As a result, the operation is quite expensive.

    If either of the previously mentioned system variables is set to ON, then OPTIMIZE TABLE will optimize some data without rebuilding the table. However, the file size will not be reduced.

    For example, this succeeds:

    And this succeeds, but the table is not rebuilt:
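
    Both cases can be sketched as follows (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50));
    SET SESSION alter_algorithm='INPLACE';
    OPTIMIZE TABLE tab;  -- with innodb_defragment=OFF and innodb_optimize_fulltext_only=OFF:
                         -- "recreate + analyze", i.e. the table is rebuilt
    SET GLOBAL innodb_defragment=ON;
    OPTIMIZE TABLE tab;  -- defragments some data without rebuilding the table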

    This applies to OPTIMIZE TABLE for InnoDB tables.

    ALTER TABLE ... RENAME TO and RENAME TABLE ...

    InnoDB supports renaming a table with ALGORITHM set to INPLACE.

    This operation only changes the table's metadata, so the table does not have to be rebuilt.

    This operation supports the exclusive locking strategy. This strategy can be explicitly chosen by setting the LOCK clause to EXCLUSIVE. When this strategy is used, concurrent DML is not permitted.

    For example, this succeeds:

    And this succeeds:
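
    Both forms can be sketched as follows (names illustrative):

    CREATE OR REPLACE TABLE tab (a INT PRIMARY KEY, b VARCHAR(50));
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab RENAME TO old_tab;  -- succeeds; metadata-only change
    RENAME TABLE old_tab TO tab;        -- succeeds; equivalent syntax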

    This applies to ALTER TABLE ... RENAME TO and RENAME TABLE for InnoDB tables.

    Limitations

    Limitations Related to Fulltext Indexes

    • If a table has more than one FULLTEXT index, then it cannot be rebuilt by any ALTER TABLE operations when ALGORITHM is set to INPLACE.

    • If a table has a FULLTEXT index, then it cannot be rebuilt by any ALTER TABLE operations when the LOCK clause is set to NONE.

    Limitations Related to Spatial Indexes

    • If a table has a SPATIAL index, then it cannot be rebuilt by any ALTER TABLE operations when the LOCK clause is set to NONE.

    Limitations Related to Generated (Virtual and Persistent/Stored) Columns

    Generated columns do not currently support online DDL for all of the same operations that are supported for "real" columns.

    See Generated (Virtual and Persistent/Stored) Columns: Statement Support for more information on the limitations.

    This page is licensed: CC BY-SA / Gnu FDL

    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ADD COLUMN c VARCHAR(50);
    Query OK, 0 rows affected (0.006 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab DROP COLUMN c;
    Query OK, 0 rows affected (0.021 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(50) AFTER a;
    Query OK, 0 rows affected (0.022 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab MODIFY COLUMN c INT;
    ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    ) CHARACTER SET=latin1;
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(100);
    Query OK, 0 rows affected (0.005 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(255)
    ) CHARACTER SET=latin1;
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(256);
    ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50) NOT NULL
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(50) NULL;
    Query OK, 0 rows affected (0.021 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(50) NOT NULL;
    Query OK, 0 rows affected (0.021 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c ENUM('red', 'green')
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab MODIFY COLUMN c ENUM('red', 'green', 'blue');
    Query OK, 0 rows affected (0.004 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c ENUM('red', 'green')
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab MODIFY COLUMN c ENUM('red', 'blue', 'green');
    ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c SET('red', 'green')
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab MODIFY COLUMN c SET('red', 'green', 'blue');
    Query OK, 0 rows affected (0.004 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c SET('red', 'green')
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab MODIFY COLUMN c SET('red', 'blue', 'green');
    ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50) WITH SYSTEM VERSIONING
    );
    
    SET SESSION system_versioning_alter_history='KEEP';
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab MODIFY COLUMN c VARCHAR(50) WITHOUT SYSTEM VERSIONING;
    Query OK, 0 rows affected (0.005 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ALTER COLUMN c SET DEFAULT 'NO value explicitly provided.';
    Query OK, 0 rows affected (0.005 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50) DEFAULT 'NO value explicitly provided.'
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ALTER COLUMN c DROP DEFAULT;
    Query OK, 0 rows affected (0.005 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab CHANGE COLUMN c str VARCHAR(50);
    Query OK, 0 rows affected (0.006 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab CHANGE COLUMN c num INT;
    ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
    CREATE OR REPLACE TABLE tab (
       a INT,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION sql_mode='STRICT_TRANS_TABLES';
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ADD PRIMARY KEY (a);
    Query OK, 0 rows affected (0.021 sec)
    CREATE OR REPLACE TABLE tab (
       a INT,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    INSERT INTO tab VALUES (NULL, NULL, NULL);
    
    SET SESSION sql_mode='STRICT_TRANS_TABLES';
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ADD PRIMARY KEY (a);
    ERROR 1265 (01000): Data truncated for column 'a' at row 1
    CREATE OR REPLACE TABLE tab (
       a INT,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    INSERT INTO tab VALUES (1, NULL, NULL);
    INSERT INTO tab VALUES (1, NULL, NULL);
    
    SET SESSION sql_mode='STRICT_TRANS_TABLES';
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ADD PRIMARY KEY (a);
    ERROR 1062 (23000): Duplicate entry '1' for key 'PRIMARY'
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab DROP PRIMARY KEY;
    ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Reason: Dropping a primary key is not allowed without also adding a new primary key. Try ALGORITHM=COPY
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION sql_mode='STRICT_TRANS_TABLES';
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab DROP PRIMARY KEY, ADD PRIMARY KEY (b);
    Query OK, 0 rows affected (0.020 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ADD INDEX b_index (b);
    Query OK, 0 rows affected (0.010 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    CREATE INDEX b_index ON tab (b);
    Query OK, 0 rows affected (0.011 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ADD FULLTEXT INDEX b_index (b);
    Query OK, 0 rows affected (0.055 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    CREATE FULLTEXT INDEX b_index ON tab (b);
    Query OK, 0 rows affected (0.041 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ADD FULLTEXT INDEX b_index (b);
    Query OK, 0 rows affected (0.043 sec)
    
    ALTER TABLE tab ADD FULLTEXT INDEX c_index (c);
    Query OK, 0 rows affected (0.017 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50),
       d VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ADD FULLTEXT INDEX b_index (b);
    Query OK, 0 rows affected (0.041 sec)
    
    ALTER TABLE tab ADD FULLTEXT INDEX c_index (c), ADD FULLTEXT INDEX d_index (d);
    ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Reason: InnoDB presently supports one FULLTEXT index creation at a time. Try ALGORITHM=COPY
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ADD FULLTEXT INDEX b_index (b);
    Query OK, 0 rows affected (0.040 sec)
    
    ALTER TABLE tab ADD FULLTEXT INDEX c_index (c);
    Query OK, 0 rows affected (0.015 sec)
    
    ALTER TABLE tab FORCE;
    ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Reason: InnoDB presently supports one FULLTEXT index creation at a time. Try ALGORITHM=COPY
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c GEOMETRY NOT NULL
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ADD SPATIAL INDEX c_index (c);
    Query OK, 0 rows affected (0.006 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c GEOMETRY NOT NULL
    );
    
    SET SESSION alter_algorithm='INPLACE';
    CREATE SPATIAL INDEX c_index ON tab (c);
    Query OK, 0 rows affected (0.006 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50),
       INDEX b_index (b)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab DROP INDEX b_index;
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50),
       INDEX b_index (b)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    DROP INDEX b_index ON tab;
    CREATE OR REPLACE TABLE tab1 (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50),
       d INT
    );
    
    CREATE OR REPLACE TABLE tab2 (
       a INT PRIMARY KEY,
       b VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab1 ADD FOREIGN KEY tab2_fk (d) REFERENCES tab2 (a);
    ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Reason: Adding foreign keys needs foreign_key_checks=OFF. Try ALGORITHM=COPY
    CREATE OR REPLACE TABLE tab1 (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50),
       d INT
    );
    
    CREATE OR REPLACE TABLE tab2 (
       a INT PRIMARY KEY,
       b VARCHAR(50)
    );
    
    SET SESSION foreign_key_checks=OFF;
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab1 ADD FOREIGN KEY tab2_fk (d) REFERENCES tab2 (a);
    Query OK, 0 rows affected (0.011 sec)
    CREATE OR REPLACE TABLE tab2 (
       a INT PRIMARY KEY,
       b VARCHAR(50)
    );
    
    CREATE OR REPLACE TABLE tab1 (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50),
       d INT,
       FOREIGN KEY tab2_fk (d) REFERENCES tab2 (a)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab1 DROP FOREIGN KEY tab2_fk;
    Query OK, 0 rows affected (0.005 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab AUTO_INCREMENT=100;
    Query OK, 0 rows affected (0.004 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    ) ROW_FORMAT=DYNAMIC;
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ROW_FORMAT=COMPRESSED;
    Query OK, 0 rows affected (0.025 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    ) ROW_FORMAT=COMPRESSED
      KEY_BLOCK_SIZE=4;
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab KEY_BLOCK_SIZE=2;
    Query OK, 0 rows affected (0.021 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab PAGE_COMPRESSED=1;
    Query OK, 0 rows affected (0.006 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    ) PAGE_COMPRESSED=1;
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab PAGE_COMPRESSED=0;
    Query OK, 0 rows affected (0.020 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    ) PAGE_COMPRESSED=1
      PAGE_COMPRESSION_LEVEL=5;
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab PAGE_COMPRESSION_LEVEL=4;
    Query OK, 0 rows affected (0.006 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    ) WITH SYSTEM VERSIONING;
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab DROP SYSTEM VERSIONING;
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50),
       CONSTRAINT b_not_empty CHECK (b != '')
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab DROP CONSTRAINT b_not_empty;
    Query OK, 0 rows affected (0.004 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab FORCE;
    Query OK, 0 rows affected (0.022 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab ENGINE=InnoDB;
    Query OK, 0 rows affected (0.022 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SHOW GLOBAL VARIABLES WHERE Variable_name IN('innodb_defragment', 'innodb_optimize_fulltext_only');
    +-------------------------------+-------+
    | Variable_name                 | Value |
    +-------------------------------+-------+
    | innodb_defragment             | OFF   |
    | innodb_optimize_fulltext_only | OFF   |
    +-------------------------------+-------+
    
    SET SESSION alter_algorithm='INPLACE';
    OPTIMIZE TABLE tab;
    +---------+----------+----------+-------------------------------------------------------------------+
    | Table   | Op       | Msg_type | Msg_text                                                          |
    +---------+----------+----------+-------------------------------------------------------------------+
    | db1.tab | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
    | db1.tab | optimize | status   | OK                                                                |
    +---------+----------+----------+-------------------------------------------------------------------+
    2 rows in set (0.026 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET GLOBAL innodb_defragment=ON;
    SHOW GLOBAL VARIABLES WHERE Variable_name IN('innodb_defragment', 'innodb_optimize_fulltext_only');
    +-------------------------------+-------+
    | Variable_name                 | Value |
    +-------------------------------+-------+
    | innodb_defragment             | ON    |
    | innodb_optimize_fulltext_only | OFF   |
    +-------------------------------+-------+
    
    SET SESSION alter_algorithm='INPLACE';
    OPTIMIZE TABLE tab;
    +---------+----------+----------+----------+
    | Table   | Op       | Msg_type | Msg_text |
    +---------+----------+----------+----------+
    | db1.tab | optimize | status   | OK       |
    +---------+----------+----------+----------+
    1 row in set (0.004 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    ALTER TABLE tab RENAME TO old_tab;
    Query OK, 0 rows affected (0.011 sec)
    CREATE OR REPLACE TABLE tab (
       a INT PRIMARY KEY,
       b VARCHAR(50),
       c VARCHAR(50)
    );
    
    SET SESSION alter_algorithm='INPLACE';
    RENAME TABLE tab TO old_tab;

    CONNECT PIVOT Table Type

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    This table type can be used to transform the result of another table or view (called the source table) into a pivoted table along “pivot” and “facts” columns. A pivot table is a great reporting tool that sorts and sums (by default) independently of the original data layout in the source table.

    For example, let us suppose you have the following “Expenses” table:

    Who | Week | What | Amount

    Pivoting the table contents using the 'Who' and 'Week' fields for the left columns, and the 'What' field for the top heading and summing the 'Amount' fields for each cell in the new table, gives the following desired result:

    Who | Week | Beer | Car | Food

    Note that SQL enables you to get the same result presented differently by using the “group by” clause, namely:
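
    For instance, assuming the source table is named expenses with the columns shown above, the GROUP BY form would look like this:

    SELECT Who, Week, What, SUM(Amount) AS Amount
    FROM expenses
    GROUP BY Who, Week, What;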

    However, there is no way to get the pivoted layout shown above using SQL alone. Even using embedded SQL programming, for some DBMSs this is not quite simple or automatic.

    The Pivot table type of CONNECT makes doing this much simpler.

    Using the PIVOT Tables Type

    To get the result shown in the example above, just define it as a new table with the statement:
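
    A minimal sketch of such a definition, assuming the source table is named expenses and using an illustrative name (pivex) for the pivot table:

    CREATE TABLE pivex
    ENGINE=CONNECT TABLE_TYPE=PIVOT TABNAME=expenses;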

    You can now use it as any other table, for instance to display the result shown above, just say:
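
    For instance, with the illustrative name used above:

    SELECT * FROM pivex;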

    The CONNECT implementation of the PIVOT table type does much of the work required to transform the source table:

    1. Finding the “Facts” column, by default the last column of the source table. Finding the “Facts” or “Pivot” columns automatically works only for table-based pivot tables. It does not work for view-based or SrcDef-based pivot tables, for which these columns must be explicitly specified.

    2. Finding the “Pivot” column, by default the last remaining column.

    3. Choosing the aggregate function to use, “SUM” by default.

    4. Constructing and executing the “Group By” on the “Facts” column, getting its result in memory.

    The source table's “Pivot” column must not be nullable (there is no such thing as a “null” column). The creation is refused even if the nullable column does not actually contain null values.

    If a different result is desired, Create Table options are available to change the defaults used by Pivot. For instance if we want to display the average expense for each person and product, spread in columns for each week, use the following statement:
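
    A sketch of such a definition, assuming the aggregate function, the “facts” column and the “pivot” column are selected with Function, FncCol and PivotCol entries in the OPTION_LIST (these option names are assumptions here, since the options table below was not preserved):

    CREATE TABLE pivex2
    ENGINE=CONNECT TABLE_TYPE=PIVOT TABNAME=expenses
    OPTION_LIST='PivotCol=Week,FncCol=Amount,Function=AVG';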

    Now saying:

    Will display the resulting table:

    Who | What | 3 | 4 | 5

    Restricting the Columns in a Pivot Table

    Let us suppose that we want a Pivot table from expenses summing the expenses for all people and products, whatever week they were bought. We can do this just by removing the Week column from the pivex table's column list.

    The result we get from the new table is:

    Who | Beer | Car | Food

    Note: Restricting columns is also needed when the source table contains extra columns that should not be part of the pivot table. This is true in particular for key columns that prevent a proper grouping.

    PIVOT Create Table Syntax

    The Create Table statement for PIVOT tables uses the following syntax:

    The column definition has two sets of columns:

    1. A set of columns belonging to the source table, not including the “facts” and “pivot” columns.

    2. “Data” columns receiving the values of the aggregated “facts” columns named from the values of the “pivot” column. They are indicated by the “flag” option.

    The options and sub-options available for Pivot tables are:

    Option | Type | Description
    • (*): These options must be specified in the OPTION_LIST.

    Additional Access Options

    There are four cases where pivot must call the server containing the source table or on which the SrcDef statement must be executed:

    1. The source table is not a CONNECT table.

    2. The SrcDef option is specified.

    3. The source table is on another server.

    4. The columns are not specified.

    By default, pivot tries to call the currently used server using host=localhost, user=root with no password, and port=3306. However, this may not be what is needed, in particular if the local root user has a password, in which case you can get an “access denied” error message when creating or using the pivot table.

    Specify the host, user, password and/or port options in the option_list to override the default connection options used to access the source table, get column specifications, execute the generated group by or SrcDef query.

    Defining a Pivot Table

    There are principally two ways to define a PIVOT table:

    1. From an existing table or view.

    2. Directly giving the SQL statement returning the result to pivot.

    Defining a Pivot Table from a Source Table

    The tabname standard table option is used to give the name of the source table or view.

    For tables, the GROUP BY query is generated internally, except when the GROUPBY option is specified as true. Specify that option only when the table or view already has a valid GROUP BY format.

    Directly Defining the Source of a Pivot Table in SQL

    Alternatively, the internal source can be directly defined using the SrcDef option that must have the proper group by format.

    As we have seen above, a proper Pivot Table is made from an internal intermediate table resulting from the execution of a GROUP BY statement. In many cases, it is simpler or desirable to directly specify this when creating the pivot table. This may be because the source is the result of a complex process including filtering and/or joining tables.

    To do this, use the SrcDef option, often replacing all other options. For instance, suppose that in the first example we are only interested in weeks 4 and 5. We could of course display it by:
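
    Presumably this could be done with a simple filter on the pivot table defined earlier (pivex being the illustrative name used above):

    SELECT * FROM pivex WHERE Week IN (4, 5);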

    However, what if this table is a huge table? In this case, the correct way to do it is to define the pivot table like this:
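
    A sketch of such a definition, assuming an SRCDEF create option that carries the pre-filtered GROUP BY query (table and option names illustrative):

    CREATE TABLE pivex4
    ENGINE=CONNECT TABLE_TYPE=PIVOT
    SRCDEF='SELECT Who, Week, What, SUM(Amount) AS Amount
            FROM expenses WHERE Week IN (4, 5)
            GROUP BY Who, Week, What';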

If your source table has millions of records and you plan to pivot only a small subset of it, doing so will make a big difference performance-wise. In addition, you are entirely free to use expressions, scalar functions, aliases, joins, WHERE and HAVING clauses in your SQL statement. The only constraint is that you are responsible for ensuring that the result of this statement has the correct format for the pivot processing.

Using SrcDef also permits the use of expressions and/or scalar functions. For instance:
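CREATE TABLE xpivot (
Who CHAR(10) NOT NULL,
What CHAR(12) NOT NULL,
First DOUBLE(8,2) flag=1,
Middle DOUBLE(8,2) flag=1,
Last DOUBLE(8,2) flag=1)
ENGINE=connect table_type=PIVOT
option_list='PivotCol=wk,FncCol=amnt'
Srcdef='select who, what, case when week=3 then ''First'' when
week=5 then ''Last'' else ''Middle'' end as wk, sum(amount) *
6.56 as amnt from expenses group by who, what, wk';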

Now the statement SELECT * FROM xpivot will display the result:

    Who
    What
    First
    Middle
    Last

    Note 1: to avoid multiple lines having the same fixed column values, it is mandatory in SrcDef to place the pivot column at the end of the group by list.

Note 2: in the create statement SrcDef, it is mandatory to give aliases to the columns containing expressions so they are recognized by the other options.

    Note 3: in the SrcDef select statement, quotes must be escaped because the entire statement is passed to MariaDB between quotes. Alternatively, specify it between double quotes.

Note 4: We could have let CONNECT do the column definitions. However, because they are defined from the sorted names, the Middle column would have been placed at the end of them.

    Specifying the Columns Corresponding to the Pivot Column

    These columns must be named from the values existing in the “pivot” column. For instance, supposing we have the following pet table:

| name | race | number |
| John | dog | 2 |
| Bill | cat | 1 |
| Mary | dog | 1 |
| Mary | cat | 1 |
| Lisbeth | rabbit | 2 |
| Kevin | cat | 2 |
| Kevin | bird | 6 |
| Donald | dog | 1 |
| Donald | fish | 3 |

    Pivoting it using race as the pivot column is done with:
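CREATE TABLE pivet
ENGINE=connect table_type=pivot tabname=pet
option_list='PivotCol=race,groupby=1';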

    This gives the result:

    name
    dog
    cat
    rabbit
    bird
    fish

    By the way, does this ring a bell? It shows that in a way PIVOT tables are doing the opposite of what OCCUR tables do.

We can alternatively define the table columns explicitly, but what happens if the Pivot column contains values that do not match a “data” column? There are three cases depending on the specified options and flags.

First case: If no specific options are specified, an error occurs when trying to display the table. The query will abort with an error message stating that a non-matching value was met. Note that because the column list is established when creating the table, this is prone to occur if rows containing new values for the pivot column are inserted into the source table. If this happens, you should re-create the table or manually add the new columns to the pivot table.

    Second case: The accept option was specified. For instance:
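CREATE TABLE xpivet2 (
name VARCHAR(12) NOT NULL,
dog INT NOT NULL DEFAULT 0 flag=1,
cat INT NOT NULL DEFAULT 0 flag=1)
ENGINE=connect table_type=pivot tabname=pet
option_list='PivotCol=race,groupby=1,Accept=1';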

No error is raised and the non-matching values are ignored. This table is displayed as:

    name
    dog
    cat

    Third case: A “dump” column was specified with the flag value equal to 2. All non-matching values are added in this column. For instance:
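CREATE TABLE xpivet (
name VARCHAR(12) NOT NULL,
dog INT NOT NULL DEFAULT 0 flag=1,
cat INT NOT NULL DEFAULT 0 flag=1,
other INT NOT NULL DEFAULT 0 flag=2)
ENGINE=connect table_type=pivot tabname=pet
option_list='PivotCol=race,groupby=1';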

This table is displayed as:

    name
    dog
    cat
    other

It is a good idea to provide such a “dump” column if new rows, having pivot column values that did not exist when the pivot table was created, are likely to be inserted into the source table.

    Pivoting Big Source Tables

    This may sometimes be risky. If the pivot column contains too many distinct values, the resulting table may have too many columns. In all cases the process involved, finding distinct values when creating the table or doing the group by when using it, can be very long and sometimes can fail because of exhausted memory.

    Restrictions by a where clause should be applied to the source table when creating the pivot table rather than to the pivot table itself. This can be done by creating an intermediate table or using as source a view or a srcdef option.

    All PIVOT tables are read only.

    This page is licensed: CC BY-SA / Gnu FDL

    CONNECT XML Table Type

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    Overview

    CONNECT supports tables represented by XML files. For these tables, the standard input/output functions of the operating system are not used but the parsing and processing of the file is delegated to a specialized library. Currently two such systems are supported: libxml2, a part of the GNOME framework, but which does not require GNOME and, on Windows, MS-DOM (DOMDOC), the Microsoft standard support of XML documents.

    SELECT who, week, what, SUM(amount) FROM expenses
           GROUP BY who, week, what;
    CREATE TABLE pivex
    ENGINE=connect table_type=pivot tabname=expenses;
    SELECT * FROM pivex;
    CREATE TABLE pivex2
    ENGINE=connect table_type=pivot tabname=expenses
    option_list='PivotCol=Week,Function=AVG';
    SELECT * FROM pivex2;
    ALTER TABLE pivex DROP COLUMN week;
    CREATE TABLE pivot_table_name
    [(column_definition)]
    ENGINE=CONNECT table_type=PIVOT
    {tabname='source_table_name' | srcdef='source_table_def'}
    [option_list='pivot_table_option_list'];
    SELECT * FROM pivex WHERE week IN (4,5);
    CREATE TABLE pivex4
    ENGINE=connect table_type=pivot
    option_list='PivotCol=what,FncCol=amount'
    SrcDef='select who, week, what, sum(amount) from expenses
    where week in (4,5) group by who, week, what';
    CREATE TABLE xpivot (
    Who CHAR(10) NOT NULL,
    What CHAR(12) NOT NULL,
    First DOUBLE(8,2) flag=1,
    Middle DOUBLE(8,2) flag=1,
    Last DOUBLE(8,2) flag=1)
    ENGINE=connect table_type=PIVOT
    option_list='PivotCol=wk,FncCol=amnt'
    Srcdef='select who, what, case when week=3 then ''First'' when
    week=5 then ''Last'' else ''Middle'' end as wk, sum(amount) *
    6.56 as amnt from expenses group by who, what, wk';
    SELECT * FROM xpivot;
    CREATE TABLE pivet
    ENGINE=connect table_type=pivot tabname=pet
    option_list='PivotCol=race,groupby=1';
    CREATE TABLE xpivet2 (
    name VARCHAR(12) NOT NULL,
    dog INT NOT NULL DEFAULT 0 flag=1,
    cat INT NOT NULL DEFAULT 0 flag=1)
    ENGINE=connect table_type=pivot tabname=pet
    option_list='PivotCol=race,groupby=1,Accept=1';
    CREATE TABLE xpivet (
    name VARCHAR(12) NOT NULL,
    dog INT NOT NULL DEFAULT 0 flag=1,
    cat INT NOT NULL DEFAULT 0 flag=1,
    other INT NOT NULL DEFAULT 0 flag=2)
    ENGINE=connect table_type=pivot tabname=pet
    option_list='PivotCol=race,groupby=1';

DOMDOC is the default for the Windows version of CONNECT, and libxml2 is always used on other systems. On Windows the choice can be specified using the XMLSUP create table list option, for instance by specifying option_list='xmlsup=libxml2'.

    Creating XML tables

    First of all, it must be understood that XML is a very general language used to encode data having any structure. In particular, the tag hierarchy in an XML file describes a tree structure of the data. For instance, consider the file:

    It represents data having the structure:

    This structure seems at first view far from being tabular. However, modern database management systems, including MariaDB, implement something close to the relational model and work on tables that are structurally not hierarchical but tabular with rows and columns.

    Nevertheless, CONNECT can do it. Of course, it cannot guess what you want to extract from the XML structure, but gives you the possibility to specify it when you create the table[1].

    Let us take a first example. Suppose you want to make a table from the above document, displaying the node contents.

    For this, you can define a table xsamptag as:
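CREATE TABLE xsamptag (
AUTHOR CHAR(50),
TITLE CHAR(32),
TRANSLATOR CHAR(40),
PUBLISHER CHAR(40),
DATEPUB INT(4))
ENGINE=CONNECT table_type=XML file_name='Xsample.xml';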

It will be displayed as:

| AUTHOR | TITLE | TRANSLATOR | PUBLISHER | DATEPUB |
| Jean-Christophe Bernadac | Construire une application XML | | Eyrolles Paris | 1999 |
| William J. Pardi | XML en Action | James Guerin | Microsoft Press Paris | 1999 |

    Let us try to understand what happened. By default the column names correspond to tag names. Because this file is rather simple, CONNECT was able to default the top tag of the table as the root node <BIBLIO> of the file, and the row tags as the <BOOK> children of the table tag. In a more complex file, this should have been specified, as we will see later. Note that we didn't have to worry about the sub-tags such as <FIRSTNAME> or <LASTNAME> because CONNECT automatically retrieves the entire text contained in a tag and its sub-tags[2].

    Only the first author of the first book appears. This is because only the first occurrence of a column tag has been retrieved so the result has a proper tabular structure. We will see later what we can do about that.

    How can we retrieve the values specified by attributes? By using a Coltype table option to specify the default column type. The value ‘@’ means that column names match attribute names. Therefore, we can retrieve them by creating a table such as:
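CREATE TABLE xsampattr (
ISBN CHAR(15),
LANG CHAR(2),
SUBJECT CHAR(32))
ENGINE=CONNECT table_type=XML file_name='Xsample.xml'
option_list='Coltype=@';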

    This table returns the following:

| ISBN | LANG | SUBJECT |
| 9782212090819 | fr | applications |
| 9782840825685 | fr | applications |

    Now to define a table that will give us all the previous information, we must specify the column type for each column. Because in the next statement the column type defaults to Node, the field_format column parameter was used to indicate which columns are attributes:

    From Connect 1.7.0002

    Before Connect 1.7.0002

    Once done, we can enter the query:

    This will return the following result:

| SUBJECT | LANG | TITLE | AUTHOR |
| applications | fr | Construire une application XML | Jean-Christophe Bernadac |
| applications | fr | XML en Action | William J. Pardi |

Note that we have been lucky: unlike SQL, XML is case sensitive, and the column names matched the node names only because the column names were given in upper case. Note also that the order of the columns in the table could have been different from the order in which the nodes appear in the XML file.

    Using Xpaths with XML tables

    Xpath is used by XML to locate and retrieve nodes. The table's main node Xpath is specified by the tabname option. If just the node name is given, CONNECT constructs an Xpath such as ‘BIBLIO’ in the example above that should retrieve the BIBLIO node wherever it is within the XML file.

    The row nodes are by default the children of the table node. However, for instance to eliminate some children nodes that are not real row nodes, the row node name can be specified using the rownode sub-option of the option_list option.

    The field_format options we used above can be specified to locate more precisely where and what information to retrieve using an Xpath-like syntax. For instance:

    From Connect 1.7.0002

    Before Connect 1.7.0002

    This very flexible column parameter serves several purposes:

    • To specify the tag name, or the attribute name if different from the column name.

    • To specify the type (tag or attribute) by a prefix of '@' for attributes.

    • To specify the path for sub-tags using the '/' character.

    This path is always relative to the current context (the column top node) and cannot be specified as an absolute path from the document root, therefore a leading '/' cannot be used. The path cannot be variable in node names or depth, therefore using '//' is not allowed.

The query SELECT isbn, title, translated, tranfn, tranln, LOCATION FROM xsampall WHERE translated IS NOT NULL; replies:

| ISBN | TITLE | TRANSLATED | TRANFN | TRANLN | LOCATION |
| 9782840825685 | XML en Action | adapté de l'anglais par | James | Guerin | Paris |

    Libxml2 default name space issue

    An issue with libxml2 is that some files can declare a default name space in their root node. Because Xpath only searches in that name space, the nodes will not be found if they are not prefixed. If this happens, specify the tabname option as an Xpath ignoring the current name space:
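TABNAME="//*[local-name()='BIBLIO']"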

This must also be done for the default or specified Xpath of the non-attribute columns. For instance:
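title CHAR(32) field_format="*[local-name()='TITLE']",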

    Note: This raises an error (and is useless anyway) with DOMDOC.

    Direct access on XML tables

    Direct access is available on XML tables. This means that XML tables can be sorted and used in joins, even in the one-side of the join.

    However, building a permanent index is not yet implemented. It is unclear whether this can be useful. Indeed, the DOM implementation that is used to access these tables firstly parses the whole file and constructs a node tree in memory. This may often be the longest part of the process, so the use of an index would not be of great value. Note also that this limits the XML files to a reasonable size. Anyway, when speed is important, this table type is not the best to use. Therefore, in these cases, it is probably better to convert the file to another type by inserting the XML table into another table of a more appropriate type for performance.

    Accessing tags with namespaces

    With the Windows DOMDOC support, this can be done using the prefix in the tabname column option and/or xpath column option. For instance, given the file gns.xml:

    and the defined CONNECT table:
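CREATE TABLE xgns (
`lon` DOUBLE(21,16) NOT NULL `xpath`='@',
`lat` DOUBLE(20,16) NOT NULL `xpath`='@',
`ele` DOUBLE(21,16) NOT NULL `xpath`='gns:ele',
`time` DATETIME date_format="YYYY-MM-DD 'T' hh:mm:ss '.000Z'"
)
ENGINE=CONNECT DEFAULT CHARSET=latin1 `table_type`=XML
`file_name`='gns.xml' tabname='gns:trkseg' option_list='xmlsup=domdoc';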

    Displays:

| lon | lat | ele | time |
| -121,982223510742 | 37,3884925842285 | 6,6108512878418 | 01/04/2014 14:54:05 |
| -121,982192993164 | 37,3885803222656 | 0 | 01/04/2014 14:54:08 |
| -121,982162475586 | 37,3886299133301 | 0 | 01/04/2014 14:54:10 |

    Only the prefixed ‘ele’ tag is recognized.

    However, this does not work with the libxml2 support. The solution is then to use a function ignoring the name space:
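CREATE TABLE xgns2 (
`lon` DOUBLE(21,16) NOT NULL `xpath`='@',
`lat` DOUBLE(20,16) NOT NULL `xpath`='@',
`ele` DOUBLE(21,16) NOT NULL `xpath`="*[local-name()='ele']",
`time` DATETIME date_format="YYYY-MM-DD 'T' hh:mm:ss '.000Z'"
)
ENGINE=CONNECT DEFAULT CHARSET=latin1 `table_type`=XML
`file_name`='gns.xml' tabname="*[local-name()='trkseg']" option_list='xmlsup=libxml2';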

Then the query SELECT * FROM xgns2 displays:

| lon | lat | ele | time |
| -121,982223510742 | 37,3884925842285 | 6,6108512878418 | 01/04/2014 14:54:05 |
| -121,982192993164 | 37,3885803222656 | 6.7878279685974 | 01/04/2014 14:54:08 |
| -121,982162475586 | 37,3886299133301 | 6.7719874382019 | 01/04/2014 14:54:10 |

This time, all ‘ele’ tags are recognized. This solution does not work with DOMDOC.

    Having Columns defined by Discovery

    It is possible to let the MariaDB discovery process do the job of column specification. When columns are not defined in the CREATE TABLE statement, CONNECT endeavours to analyze the XML file and to provide the column specifications. This is possible only for true XML tables, but not for HTML tables.

    For instance, the xsamp table could have been created specifying:
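CREATE TABLE xsamp
ENGINE=CONNECT table_type=XML file_name='Xsample.xml'
tabname='BIBLIO' option_list='rownode=BOOK';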

    Let’s check how it was actually specified using the SHOW CREATE TABLE statement:

    It is equivalent except for the column sizes that have been calculated from the file as the maximum length of the corresponding column when it was a normal value. Also, all columns are specified as type CHAR because XML does not provide information about the node content data type. Nullable is set to true if the column is missing in some rows.

    If a more complex definition is desired, you can ask CONNECT to analyse the XPATH up to a given level using the level option in the option list. The level value is the number of nodes that are taken in the XPATH. For instance:
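CREATE TABLE xsampall
ENGINE=CONNECT table_type=XML file_name='Xsample.xml'
tabname='BIBLIO' option_list='rownode=BOOK,Level=1';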

    This will define the table as:

    From Connect 1.7.0002

    Then if we ask:

    Everything seems correct when we get the result:

    SUBJECT
    AUTHOR
    TITLE
    TRANSLATOR
    PUBLISHER

    applications

    Jean-Christophe Bernadac

    Construire une application XML

    Eyrolles Paris

    applications

    William J. Pardi

    XML en Action

    James Guerin

    Microsoft Press Paris

    However if we enter the apparently equivalent query on the xsampall table, based on the same file:

    this returns an apparently wrong answer:

    SUBJECT
    AUTHOR
    TITLE
    TRANSLATOR
    PUBLISHER

    applications

    Jean-Christophe Bernadac

    Construire une application XML

    Eyrolles Paris

    applications

    William J. Pardi

    XML en Action

    James Guerin

    Microsoft Press Paris

    What happened here? Simply, because we used the xsamp table to do the Insert, what has been inserted within the XML file had the structure described for xsamp:

    CONNECT cannot "invent" sub-tags that are not part of the xsamp table. Because these sub-tags do not exist, the xsampall table cannot retrieve the information that should be attached to them. If we want to be able to query the XML file by all the defined tables, the correct way to insert a new book to the file is to use the xsampall table, the only one that addresses all the components of the original document:

    Now the added book, in the XML file, will have the required structure:

Note: We used a column list in the Insert statements to avoid generating a <TRANSLATOR> node with sub-nodes, all containing null values (this works on Windows only).

    Multiple nodes in the XML document

    Let us come back to the above example XML file. We have seen that the author node can be "multiple" meaning that there can be more than one author of a book. What can we do to get the complete information fitting the relational model? CONNECT provides you with two possibilities, but is restricted to only one such multiple node per table.

The first and most challenging one is to return as many rows as there are authors, the other columns being repeated as if we had made a join between the author column and the rest of the table. To achieve this, simply specify the “multiple” node name and the “expand” option when creating the table. For instance, we can create the xsamp2 table like this:
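CREATE TABLE xsamp2 (
ISBN CHAR(15) field_format='@',
LANG CHAR(2) field_format='@',
SUBJECT CHAR(32) field_format='@',
AUTHOR CHAR(40),
TITLE CHAR(32),
TRANSLATOR CHAR(32),
PUBLISHER CHAR(32),
DATEPUB INT(4))
ENGINE=CONNECT table_type=XML file_name='Xsample.xml'
tabname='BIBLIO'
option_list='rownode=BOOK,Expand=1,Mulnode=AUTHOR,Limit=2';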

    In this statement, the Limit option specifies the maximum number of values that are expanded. If not specified, it defaults to 10. Any values above the limit are ignored and a warning message issued[3]. Now you can enter a query such as:

    This will retrieve and display the following result:

    ISBN
    SUBJECT
    AUTHOR
    TITLE

    9782212090819

    applications

    Jean-Christophe Bernadac

    Construire une application XML

    9782212090819

    applications

    François Knab

    Construire une application XML

    9782840825685

    applications

    William J. Pardi

In this case, this is as if the table had four rows. However, if we enter the query SELECT isbn, subject, title, publisher FROM xsamp2; this time the result will be:

    ISBN
    SUBJECT
    TITLE
    PUBLISHER

    9782212090819

    applications

    Construire une application XML

    Eyrolles Paris

    9782840825685

    applications

    XML en Action

    Microsoft Press Paris

    9782212090529

    général

    XML, Langage et Applications

Because the author column does not appear in the query, the corresponding rows were not expanded. This is somewhat strange because this would have been different if we had been working on a table of a different type. However, it is closer to the relational model, for which there should not be two identical rows (tuples) in a table. Nevertheless, you should be aware of this somewhat erratic behavior. For instance, the query SELECT isbn, subject, title, publisher FROM xsamp2 WHERE author <> '' replies:

    ISBN
    SUBJECT
    TITLE
    PUBLISHER

    9782212090819

    applications

    Construire une application XML

    Eyrolles Paris

    9782212090819

    applications

    Construire une application XML

    Eyrolles Paris

    9782840825685

    applications

    XML en Action

    Even though the author column does not appear in the result, the corresponding row was expanded because the multiple column was used in the where clause.

    Intermediate multiple node

    The "multiple" node can be an intermediate node. If we want to do the same expanding with the xsampall table, there are nothing more to do. The_xsampall2_ table can be created with:

    From Connect 1.7.0002

    Before Connect 1.7.0002

    The only difference is that the "multiple" node is an intermediate node in the path. The resulting table can be seen with a query such as:

    This query displays:

    SUBJECT
    LANG
    TITLE
    FIRST
    LAST
    YEAR

    applications

    fr

    Construire une application XML

    Jean-Christophe

    Bernadac

    1999

    applications

    fr

    Construire une application XML

    These composite tables, half array half tree, reserve some surprises for us when updating, deleting from or inserting into them. Insert just cannot generate this structure; if two rows are inserted with just a different author, two book nodes are generated in the XML file. Delete always deletes one book node and all its children nodes even if specified against only one author. Update is more complicated:

    After these three updates, the first two responding "Affected rows: 1" and the last one responding "Affected rows: 2", the last query answers:

    subject
    lang
    title
    first
    last
    year

    applications

    fr

    Construire une application XML

    Jean-Christophe

    Mercier

    2002

    applications

    fr

    Construire une application XML

    What must be understood here is that the Update modifies node values in the XML file, not cell values in the relational table. The first update worked normally. The second update changed the year value of the book and this shows for the two expanded rows because there is only one DATEPUB node for that book. Because the third update applies to a row having a certain date value, both author names were updated.

    Making a List of Multiple Values

    Another way to see multiple values is to ask CONNECT to make a comma separated list of the multiple node values. This time, it can only be done if the "multiple" node is not intermediate. For example, we can modify the xsamp2 table definition by:
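ALTER TABLE xsamp2 option_list='rownode=BOOK,Mulnode=AUTHOR,Limit=3';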

This time 'Expand' is not specified, and Limit gives the maximum number of items in the list. Now if we enter the query SELECT isbn, subject, author "AUTHOR(S)", title FROM xsamp2; we will get the following result:

    ISBN
    SUBJECT
    AUTHOR(S)
    TITLE

    9782212090819

    applications

    Jean-Christophe Bernadac, François Knab

    Construire une application XML

    9782840825685

    applications

    William J. Pardi

    XML en Action

    9782212090529

    général

    Alain Michard

    Note that updating the "multiple" column is not possible because CONNECT does not know which of the nodes to update.

    This could not have been done with the xsampall2 table because the author node is intermediate in the path, and making two lists, one of first names and another one of last names would not make sense anyway.

    What if a table contains several multiple nodes

    This can be handled by creating several tables on the same file, each containing only one multiple node and constructing the desired result using joins.

    Support of HTML Tables

    Most tables included in HTML documents cannot be processed by CONNECT because the HTML language is often not compatible with the syntax of XML. In particular, XML requires all open tags to be matched by a closing tag while it is sometimes optional in HTML. This is often the case concerning column tags.

    However, you can meet tables that respect the XML syntax but have some of the features of HTML tables. For instance:

    Here the different column tags are included in <td></td> tags as for HTML tables. You cannot just add this tag in the Xpath of the columns, because the search is done on the first occurrence of each tag, and this would cause this search to fail for all columns except the first one. This case is handled by specifying the Colnode table option that gives the name of these column tags, for example:

    From Connect 1.7.0002

    Before Connect 1.7.0002

The table is displayed as:

| Name | Origin | Description |
| Huntsman | Bath, UK | Wonderful hop, light alcohol |
| Tuborg | Danmark | In small bottles |

    However, you can deal with tables even closer to the HTML model. For example the coffee.htm file:

    Here column values are directly represented by the TD tag text. You cannot declare them as tags nor as attributes. In addition, they are not located using their name but by their position within the row. Here is how to declare such a table to CONNECT:
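CREATE TABLE coffee (
`Name` CHAR(16),
`Cups` INT(8),
`Type` CHAR(16),
`Sugar` CHAR(4))
ENGINE=CONNECT table_type=XML file_name='coffee.htm'
tabname='TABLE' header=1 option_list='Coltype=HTML';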

You specify the fact that columns are located by position by setting the Coltype option to 'HTML'. Each column position (0-based) is given by the flag column parameter, which is set by default in sequence. Now we are able to display the table:

| Name | Cups | Type | Sugar |
| T. Sexton | 10 | Espresso | No |
| J. Dinnen | 5 | Decaf | Yes |

    Note 1: We specified 'header=n' in the create statement to indicate that the first n rows of the table are not data rows and should be skipped.

    Note 2: In this last example, we did not specify the node names using the Rownode and Colnode options because when Coltype is set to 'HTML' they default to 'Rownode=TR' and 'Colnode=TD'.

    Note 3: The Coltype option is a word only the first character of which is significant. Recognized values are:

    T(ag) or N(ode)

    Column names match a tag name (the default).

    A(ttribute) or @

    Column names match an attribute name.

    H(tml) or C(ol) or P(os)

Columns are retrieved by their position.

    New file setting

    Some create options are used only when creating a table on a new file, i. e. when inserting into a file that does not exist yet. When specified, the 'Header' option will create a header row with the name of the table columns. This is chiefly useful for HTML tables to be displayed on a web browser.

    Some new list-options are used in this context:

    Encoding

    The encoding of the new document, defaulting to UTF-8.

    Attribute

    A list of 'attname=attvalue' separated by ';' to add to the table node.

    HeadAttr

    An attribute list to be added to the header row node.

    Let us see for instance, the following create statement:
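CREATE TABLE handlers (
handler CHAR(64),
version CHAR(20),
author CHAR(64),
description CHAR(255),
maturity CHAR(12))
ENGINE=CONNECT table_type=XML file_name='handlers.htm'
tabname='TABLE' header=yes
option_list='coltype=HTML,encoding=ISO-8859-1,
attribute=border=1;cellpadding=5,headattr=bgcolor=yellow';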

    Supposing the table file does not exist yet, the first insert into that table, for instance by the following statement:
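INSERT INTO handlers SELECT plugin_name, plugin_version,
plugin_author, plugin_description, plugin_maturity FROM
information_schema.plugins WHERE plugin_type = 'DAEMON';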

    will generate the following file:

This file can be used to display the table in a web browser (the encoding should be ISO-8859-x).

| handler | version | author | description | maturity |
| Maria | 1.5 | Monty Program Ab | Compatibility aliases for the Aria engine | Gamma |

    Note: The XML document encoding is generally specified in the XML header node and can be different from the DATA_CHARSET, which is always UTF-8 for XML tables. Therefore the table DATA_CHARSET character set should be unspecified, or specified as UTF8. The Encoding specification is useful only for new XML files and ignored for existing files having their encoding already specified in the header node.

    Notes

    1. ↑ CONNECT does not claim to be able to deal with any XML document. Besides, those that can usefully be processed for data analysis are likely to have a structure that can easily be transformed into a table.

    2. ↑ With libxml2, sub tags text can be separated by 0 or several blanks depending on the structure and indentation of the data file.

    3. ↑ This may cause some rows to be lost because an eventual where clause on the “multiple” column is applied only on the limited number of retrieved rows.

    This page is licensed: CC BY-SA / Gnu FDL

    <?xml version="1.0" encoding="ISO-8859-1"?>
    <BIBLIO SUBJECT="XML">
       <BOOK ISBN="9782212090819" LANG="fr" SUBJECT="applications">
          <AUTHOR>
             <FIRSTNAME>Jean-Christophe</FIRSTNAME>
             <LASTNAME>Bernadac</LASTNAME>
          </AUTHOR>
          <AUTHOR>
             <FIRSTNAME>François</FIRSTNAME>
             <LASTNAME>Knab</LASTNAME>
          </AUTHOR>
          <TITLE>Construire une application XML</TITLE>
          <PUBLISHER>
             <NAME>Eyrolles</NAME>
             <PLACE>Paris</PLACE>
          </PUBLISHER>
          <DATEPUB>1999</DATEPUB>
       </BOOK>
       <BOOK ISBN="9782840825685" LANG="fr" SUBJECT="applications">
          <AUTHOR>
             <FIRSTNAME>William J.</FIRSTNAME>
             <LASTNAME>Pardi</LASTNAME>
          </AUTHOR>
          <TRANSLATOR PREFIX="adapté de l'anglais par">
             <FIRSTNAME>James</FIRSTNAME>
             <LASTNAME>Guerin</LASTNAME>
          </TRANSLATOR>
          <TITLE>XML en Action</TITLE>
          <PUBLISHER>
             <NAME>Microsoft Press</NAME>
             <PLACE>Paris</PLACE>
          </PUBLISHER>
          <DATEPUB>1999</DATEPUB>
       </BOOK>
    </BIBLIO>
    <BIBLIO>
                            __________|_________
                           |                    |
                <BOOK:ISBN,LANG,SUBJECT>        |
             ______________|_______________     |
            |        |         |           |    |
         <AUTHOR> <TITLE> <PUBLISHER> <DATEPUB> |
        ____|____            ___|____           |
       |    |    |          |        |          |
    <FIRST> | <LAST>     <NAME>   <PLACE>       |
            |                                   |
         <AUTHOR>                   <BOOK:ISBN,LANG,SUBJECT>
        ____|____         ______________________|__________________
       |         |       |            |         |        |         |
    <FIRST>   <LAST>  <AUTHOR> <TRANSLATOR> <TITLE> <PUBLISHER> <DATEPUB>
                    _____|_        ___|___            ___|____
                   |       |      |       |          |        |
                <FIRST> <LAST> <FIRST> <LAST>     <NAME>   <PLACE>
    CREATE TABLE xsamptag (
      AUTHOR CHAR(50),
      TITLE CHAR(32),
      TRANSLATOR CHAR(40),
      PUBLISHER CHAR(40),
      DATEPUB INT(4))
    ENGINE=CONNECT table_type=XML file_name='Xsample.xml';
    CREATE TABLE xsampattr (
      ISBN CHAR(15),
      LANG CHAR(2),
      SUBJECT CHAR(32))
    ENGINE=CONNECT table_type=XML file_name='Xsample.xml'
    option_list='Coltype=@';
    CREATE TABLE xsamp (
    ISBN CHAR(15) xpath='@',
    LANG CHAR(2) xpath='@',
    SUBJECT CHAR(32) xpath='@',
    AUTHOR CHAR(50),
    TITLE CHAR(32),
    TRANSLATOR CHAR(40),
    PUBLISHER CHAR(40),
    DATEPUB INT(4))
    ENGINE=CONNECT table_type=XML file_name='Xsample.xml'
    tabname='BIBLIO' option_list='rownode=BOOK';
    CREATE TABLE xsamp (
      ISBN CHAR(15) field_format='@',
      LANG CHAR(2) field_format='@',
      SUBJECT CHAR(32) field_format='@',
      AUTHOR CHAR(50),
      TITLE CHAR(32),
      TRANSLATOR CHAR(40),
      PUBLISHER CHAR(40),
      DATEPUB INT(4))
    ENGINE=CONNECT table_type=XML file_name='Xsample.xml'
    tabname='BIBLIO' option_list='rownode=BOOK';
    SELECT subject, lang, title, author FROM xsamp;
    CREATE TABLE xsampall (
    isbn CHAR(15) xpath='@ISBN',
    LANGUAGE CHAR(2) xpath='@LANG',
    subject CHAR(32) xpath='@SUBJECT',
    authorfn CHAR(20) xpath='AUTHOR/FIRSTNAME',
    authorln CHAR(20) xpath='AUTHOR/LASTNAME',
    title CHAR(32) xpath='TITLE',
    translated CHAR(32) xpath='TRANSLATOR/@PREFIX',
    tranfn CHAR(20) xpath='TRANSLATOR/FIRSTNAME',
    tranln CHAR(20) xpath='TRANSLATOR/LASTNAME',
    publisher CHAR(20) xpath='PUBLISHER/NAME',
    LOCATION CHAR(20) xpath='PUBLISHER/PLACE',
    YEAR INT(4) xpath='DATEPUB')
    ENGINE=CONNECT table_type=XML file_name='Xsample.xml'
    tabname='BIBLIO' option_list='rownode=BOOK';
    CREATE TABLE xsampall (
      isbn CHAR(15) field_format='@ISBN',
      LANGUAGE CHAR(2) field_format='@LANG',
      subject CHAR(32) field_format='@SUBJECT',
      authorfn CHAR(20) field_format='AUTHOR/FIRSTNAME',
      authorln CHAR(20) field_format='AUTHOR/LASTNAME',
      title CHAR(32) field_format='TITLE',
      translated CHAR(32) field_format='TRANSLATOR/@PREFIX',
      tranfn CHAR(20) field_format='TRANSLATOR/FIRSTNAME',
      tranln CHAR(20) field_format='TRANSLATOR/LASTNAME',
      publisher CHAR(20) field_format='PUBLISHER/NAME',
      LOCATION CHAR(20) field_format='PUBLISHER/PLACE',
      YEAR INT(4) field_format='DATEPUB')
    ENGINE=CONNECT table_type=XML file_name='Xsample.xml'
    tabname='BIBLIO' option_list='rownode=BOOK';
    SELECT isbn, title, translated, tranfn, tranln, LOCATION FROM
        xsampall WHERE translated IS NOT NULL;
    TABNAME="//*[local-name()='BIBLIO']"
    title char(32) field_format="*[local-name()='TITLE']",
    <?xml version="1.0" encoding="UTF-8"?>
    <gpx xmlns:gns="http:dummy">
    <gns:trkseg>
    <trkpt lon="-121.9822235107421875" lat="37.3884925842285156">
    <gns:ele>6.610851287841797</gns:ele>
    <time>2014-04-01T14:54:05.000Z</time>
    </trkpt>
    <trkpt lon="-121.9821929931640625" lat="37.3885803222656250">
    <ele>6.787827968597412</ele>
    <time>2014-04-01T14:54:08.000Z</time>
    </trkpt>
    <trkpt lon="-121.9821624755859375" lat="37.3886299133300781">
    <ele>6.771987438201904</ele>
    <time>2014-04-01T14:54:10.000Z</time>
    </trkpt>
    </gns:trkseg>
    </gpx>
    CREATE TABLE xgns (
    `lon` DOUBLE(21,16) NOT NULL `xpath`='@',
    `lat` DOUBLE(20,16) NOT NULL `xpath`='@',
    `ele` DOUBLE(21,16) NOT NULL `xpath`='gns:ele',
    `time` DATETIME date_format="YYYY-MM-DD 'T' hh:mm:ss '.000Z'"
    ) 
      ENGINE=CONNECT DEFAULT CHARSET=latin1 `table_type`=XML 
      `file_name`='gns.xml' tabname='gns:trkseg' option_list='xmlsup=domdoc';
    SELECT * FROM xgns;
    CREATE TABLE xgns2 (
    `lon` DOUBLE(21,16) NOT NULL `xpath`='@',
    `lat` DOUBLE(20,16) NOT NULL `xpath`='@',
    `ele` DOUBLE(21,16) NOT NULL `xpath`="*[local-name()='ele']",
    `time` DATETIME date_format="YYYY-MM-DD 'T' hh:mm:ss '.000Z'"
    ) 
      ENGINE=CONNECT DEFAULT CHARSET=latin1 `table_type`=XML 
      `file_name`='gns.xml' tabname="*[local-name()='trkseg']" option_list='xmlsup=libxml2';
    SELECT * FROM xgns2;
    CREATE TABLE xsamp
    ENGINE=CONNECT table_type=XML file_name='Xsample.xml'
    tabname='BIBLIO' option_list='rownode=BOOK';
    CREATE TABLE `xsamp` (
      `ISBN` CHAR(13) NOT NULL `FIELD_FORMAT`='@',
      `LANG` CHAR(2) NOT NULL `FIELD_FORMAT`='@',
      `SUBJECT` CHAR(12) NOT NULL `FIELD_FORMAT`='@',
      `AUTHOR` CHAR(24) NOT NULL,
      `TRANSLATOR` CHAR(12) DEFAULT NULL,
      `TITLE` CHAR(30) NOT NULL,
      `PUBLISHER` CHAR(21) NOT NULL,
      `DATEPUB` CHAR(4) NOT NULL
    ) ENGINE=CONNECT DEFAULT CHARSET=latin1 `TABLE_TYPE`='XML' 
      `FILE_NAME`='E:/Data/Xml/Xsample.xml' `TABNAME`='BIBLIO' `OPTION_LIST`='rownode=BOOK';
    CREATE TABLE xsampall
    ENGINE=CONNECT table_type=XML file_name='Xsample.xml'
    tabname='BIBLIO' option_list='rownode=BOOK,Level=1';
    CREATE TABLE `xsampall` (
      `ISBN` CHAR(13) NOT NULL `XPATH`='@',
      `LANG` CHAR(2) NOT NULL `XPATH`='@',
      `SUBJECT` CHAR(12) NOT NULL `XPATH`='@',
      `AUTHOR_FIRSTNAME` CHAR(15) NOT NULL `XPATH`='AUTHOR/FIRSTNAME',
      `AUTHOR_LASTNAME` CHAR(8) NOT NULL `XPATH`='AUTHOR/LASTNAME',
      `TRANSLATOR_PREFIX` CHAR(24) DEFAULT NULL `XPATH`='TRANSLATOR/@PREFIX',
      `TRANSLATOR_FIRSTNAME` CHAR(7) DEFAULT NULL `XPATH`='TRANSLATOR/FIRSTNAME',
      `TRANSLATOR_LASTNAME` CHAR(6) DEFAULT NULL `XPATH`='TRANSLATOR/LASTNAME',
      `TITLE` CHAR(30) NOT NULL,
      `PUBLISHER_NAME` CHAR(15) NOT NULL `XPATH`='PUBLISHER/NAME',
      `PUBLISHER_PLACE` CHAR(5) NOT NULL `XPATH`='PUBLISHER/PLACE',
      `DATEPUB` CHAR(4) NOT NULL
    ) ENGINE=CONNECT DEFAULT CHARSET=latin1 `TABLE_TYPE`='XML' `FILE_NAME`='Xsample.xml' `TABNAME`='BIBLIO' `OPTION_LIST`='rownode=BOOK,Depth=1';
Before Connect 1.7.0002
    CREATE TABLE `xsampall` (
      `ISBN` CHAR(13) NOT NULL `FIELD_FORMAT`='@',
      `LANG` CHAR(2) NOT NULL `FIELD_FORMAT`='@',
      `SUBJECT` CHAR(12) NOT NULL `FIELD_FORMAT`='@',
      `AUTHOR_FIRSTNAME` CHAR(15) NOT NULL `FIELD_FORMAT`='AUTHOR/FIRSTNAME',
      `AUTHOR_LASTNAME` CHAR(8) NOT NULL `FIELD_FORMAT`='AUTHOR/LASTNAME',
      `TRANSLATOR_PREFIX` CHAR(24) DEFAULT NULL `FIELD_FORMAT`='TRANSLATOR/@PREFIX',
      `TRANSLATOR_FIRSTNAME` CHAR(7) DEFAULT NULL `FIELD_FORMAT`='TRANSLATOR/FIRSTNAME',
      `TRANSLATOR_LASTNAME` CHAR(6) DEFAULT NULL `FIELD_FORMAT`='TRANSLATOR/LASTNAME',
      `TITLE` CHAR(30) NOT NULL,
      `PUBLISHER_NAME` CHAR(15) NOT NULL `FIELD_FORMAT`='PUBLISHER/NAME',
      `PUBLISHER_PLACE` CHAR(5) NOT NULL `FIELD_FORMAT`='PUBLISHER/PLACE',
      `DATEPUB` CHAR(4) NOT NULL
    ) ENGINE=CONNECT DEFAULT CHARSET=latin1 `TABLE_TYPE`='XML' `FILE_NAME`='Xsample.xml' 
      `TABNAME`='BIBLIO' `OPTION_LIST`='rownode=BOOK,Level=1';
This method can be used as a quick way to make a “template” table definition that can later be edited to make the desired definition. In particular, column names are constructed from all the nodes of their path in order to have distinct column names. This can be manually edited to have the desired names, provided their Xpath is not modified.

To have a preview of how columns are defined, you can use a catalog table like this:
    CREATE TABLE xsacol
    ENGINE=CONNECT table_type=XML file_name='Xsample.xml'
    tabname='BIBLIO' option_list='rownode=BOOK,Level=1' catfunc=col;
And when asking:

SELECT column_name Name, type_name Type, column_size Size, nullable, xpath FROM xsacol;

You get the description of what the table columns are:
| Name | Type | Size | Nullable | Xpath |
| ISBN | CHAR | 13 | 0 | @ |
| LANG | CHAR | 2 | 0 | @ |
| SUBJECT | CHAR | 12 | 0 | @ |
| AUTHOR_FIRSTNAME | CHAR | 15 | 0 | AUTHOR/FIRSTNAME |
| AUTHOR_LASTNAME | CHAR | 8 | 0 | AUTHOR/LASTNAME |
| TRANSLATOR_PREFIX | CHAR | 24 | 1 | TRANSLATOR/@PREFIX |
| TRANSLATOR_FIRSTNAME | CHAR | 7 | 1 | TRANSLATOR/FIRSTNAME |
| TRANSLATOR_LASTNAME | CHAR | 6 | 1 | TRANSLATOR/LASTNAME |
| TITLE | CHAR | 30 | 0 | |
| PUBLISHER_NAME | CHAR | 15 | 0 | PUBLISHER/NAME |
| PUBLISHER_PLACE | CHAR | 5 | 0 | PUBLISHER/PLACE |
| DATEPUB | CHAR | 4 | 0 | |
    
Write operations on XML tables

You can freely use the UPDATE, DELETE and INSERT commands with XML tables. However, you must understand that the format of the updated or inserted data follows the specifications of the table you created, not the ones of the original source file. For instance, let us suppose we insert a new book using the xsamp table (not the xsampall table) with the command:
    INSERT INTO xsamp
      (isbn, lang, subject, author, title, publisher,datepub)
      VALUES ('9782212090529','fr','général','Alain Michard',
             'XML, Langage et Applications','Eyrolles Paris',1998);
    SELECT subject, author, title, translator, publisher FROM xsamp;
    SELECT subject,
    concat(authorfn, ' ', authorln) author , title,
    concat(tranfn, ' ', tranln) translator,
    concat(publisher, ' ', LOCATION) publisher FROM xsampall;
    <BOOK ISBN="9782212090529" LANG="fr" SUBJECT="général">
          <AUTHOR>Alain Michard</AUTHOR>
          <TITLE>XML, Langage et Applications</TITLE>
          <TRANSLATOR></TRANSLATOR>
          <PUBLISHER>Eyrolles Paris</PUBLISHER>
          <DATEPUB>1998</DATEPUB>
       </BOOK>
    DELETE FROM xsamp WHERE isbn = '9782212090529';
    
    INSERT INTO xsampall (isbn, LANGUAGE, subject, authorfn, authorln,
          title, publisher, LOCATION, YEAR)
       VALUES('9782212090529','fr','général','Alain','Michard',
          'XML, Langage et Applications','Eyrolles','Paris',1998);
    <BOOK ISBN="9782212090529" LANG="fr" SUBJECT="général">
          <AUTHOR>
             <FIRSTNAME>Alain</FIRSTNAME>
             <LASTNAME>Michard</LASTNAME>
          </AUTHOR>
          <TITLE>XML, Langage et Applications</TITLE>
          <PUBLISHER>
             <NAME>Eyrolles</NAME>
             <PLACE>Paris</PLACE>
          </PUBLISHER>
          <DATEPUB>1998</DATEPUB>
       </BOOK>
    CREATE TABLE xsamp2 (
      ISBN CHAR(15) field_format='@',
      LANG CHAR(2) field_format='@',
      SUBJECT CHAR(32) field_format='@',
      AUTHOR CHAR(40),
      TITLE CHAR(32),
      TRANSLATOR CHAR(32),
      PUBLISHER CHAR(32),
      DATEPUB INT(4))
    ENGINE=CONNECT table_type=XML file_name='Xsample.xml'
    tabname='BIBLIO'
    option_list='rownode=BOOK,Expand=1,Mulnode=AUTHOR,Limit=2';
    SELECT isbn, subject, author, title FROM xsamp2;
    SELECT isbn, subject, title, publisher FROM xsamp2;
    SELECT COUNT(*) FROM xsamp2;                /* Replies 3 */
    SELECT COUNT(author) FROM xsamp2;           /* Replies 4 */
    SELECT COUNT(isbn) FROM xsamp2;             /* Replies 3 */
    SELECT isbn, subject, title, publisher FROM xsamp2 WHERE author <> '';
    CREATE TABLE xsampall2 (
    isbn CHAR(15) xpath='@ISBN',
    LANGUAGE CHAR(2) xpath='@LANG',
    subject CHAR(32) xpath='@SUBJECT',
    authorfn CHAR(20) xpath='AUTHOR/FIRSTNAME',
    authorln CHAR(20) xpath='AUTHOR/LASTNAME',
    title CHAR(32) xpath='TITLE',
    translated CHAR(32) xpath='TRANSLATOR/@PREFIX',
    tranfn CHAR(20) xpath='TRANSLATOR/FIRSTNAME',
    tranln CHAR(20) xpath='TRANSLATOR/LASTNAME',
    publisher CHAR(20) xpath='PUBLISHER/NAME',
    LOCATION CHAR(20) xpath='PUBLISHER/PLACE',
    YEAR INT(4) xpath='DATEPUB')
    ENGINE=CONNECT table_type=XML file_name='Xsample.xml'
    tabname='BIBLIO' option_list='rownode=BOOK,Expand=1,Mulnode=AUTHOR,Limit=2';
    CREATE TABLE xsampall2 (
      isbn CHAR(15) field_format='@ISBN',
      LANGUAGE CHAR(2) field_format='@LANG',
      subject CHAR(32) field_format='@SUBJECT',
      authorfn CHAR(20) field_format='AUTHOR/FIRSTNAME',
      authorln CHAR(20) field_format='AUTHOR/LASTNAME',
      title CHAR(32) field_format='TITLE',
      translated CHAR(32) field_format='TRANSLATOR/@PREFIX',
      tranfn CHAR(20) field_format='TRANSLATOR/FIRSTNAME',
      tranln CHAR(20) field_format='TRANSLATOR/LASTNAME',
      publisher CHAR(20) field_format='PUBLISHER/NAME',
      LOCATION CHAR(20) field_format='PUBLISHER/PLACE',
      YEAR INT(4) field_format='DATEPUB')
    ENGINE=CONNECT table_type=XML file_name='Xsample.xml'
    tabname='BIBLIO'
    option_list='rownode=BOOK,Expand=1,Mulnode=AUTHOR,Limit=2';
    SELECT subject, LANGUAGE lang, title, authorfn FIRST, authorln
        LAST, YEAR FROM xsampall2;
    UPDATE xsampall2 SET authorfn = 'Simon' WHERE authorln = 'Knab';
    UPDATE xsampall2 SET YEAR = 2002 WHERE authorln = 'Bernadac';
    UPDATE xsampall2 SET authorln = 'Mercier' WHERE YEAR = 2002;
    ALTER TABLE xsamp2 option_list='rownode=BOOK,Mulnode=AUTHOR,Limit=3';
    SELECT isbn, subject, author "AUTHOR(S)", title FROM xsamp2;
    <?xml version="1.0"?>
    <Beers>
      <table>
        <th><td>Name</td><td>Origin</td><td>Description</td></th>
        <tr>
          <td><brandName>Huntsman</brandName></td>
          <td><origin>Bath, UK</origin></td>
          <td><details>Wonderful hop, light alcohol</details></td>
        </tr>
        <tr>
          <td><brandName>Tuborg</brandName></td>
          <td><origin>Danmark</origin></td>
          <td><details>In small bottles</details></td>
        </tr>
      </table>
    </Beers>
    CREATE TABLE beers (
    `Name` CHAR(16) xpath='brandName',
    `Origin` CHAR(16) xpath='origin',
    `Description` CHAR(32) xpath='details')
    ENGINE=CONNECT table_type=XML file_name='beers.xml'
    tabname='table' option_list='rownode=tr,colnode=td';
    CREATE TABLE beers (
      `Name` CHAR(16) field_format='brandName',
      `Origin` CHAR(16) field_format='origin',
      `Description` CHAR(32) field_format='details')
    ENGINE=CONNECT table_type=XML file_name='beers.xml'
    tabname='table' option_list='rownode=tr,colnode=td';
    <TABLE summary="This table charts the number of cups of coffe
                    consumed by each senator, the type of coffee (decaf
                    or regular), and whether taken with sugar.">
      <CAPTION>Cups of coffee consumed by each senator</CAPTION>
      <TR>
        <TH>Name</TH>
        <TH>Cups</TH>
        <TH>Type of Coffee</TH>
        <TH>Sugar?</TH>
      </TR>
      <TR>
        <TD>T. Sexton</TD>
        <TD>10</TD>
        <TD>Espresso</TD>
        <TD>No</TD>
      </TR>
      <TR>
        <TD>J. Dinnen</TD>
        <TD>5</TD>
        <TD>Decaf</TD>
        <TD>Yes</TD>
      </TR>
    </TABLE>
    CREATE TABLE coffee (
      `Name` CHAR(16),
      `Cups` INT(8),
      `Type` CHAR(16),
      `Sugar` CHAR(4))
    ENGINE=CONNECT table_type=XML file_name='coffee.htm'
    tabname='TABLE' header=1 option_list='Coltype=HTML';
    CREATE TABLE handlers (
      handler CHAR(64),
      VERSION CHAR(20),
      author CHAR(64),
      description CHAR(255),
      maturity CHAR(12))
    ENGINE=CONNECT table_type=XML file_name='handlers.htm'
    tabname='TABLE' header=yes
    option_list='coltype=HTML,encoding=ISO-8859-1,
    attribute=border=1;cellpadding=5,headattr=bgcolor=yellow';
    INSERT INTO handlers SELECT plugin_name, plugin_version,
      plugin_author, plugin_description, plugin_maturity FROM
      information_schema.plugins WHERE plugin_type = 'DAEMON';
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <!-- Created by CONNECT Version 3.05.0005 August 17, 2012 -->
    <TABLE border="1" cellpadding="5">
      <TR bgcolor="yellow">
        <TH>handler</TH>
        <TH>version</TH>
        <TH>author</TH>
        <TH>description</TH>
        <TH>maturity</TH>
      </TR>
      <TR>
        <TD>Maria</TD>
        <TD>1.5</TD>
        <TD>Monty Program Ab</TD>
        <TD>Compatibility aliases for the Aria engine</TD>
        <TD>Gamma</TD>
      </TR>
    </TABLE>


    CONNECT JSON Table Type

    The CONNECT storage engine has been deprecated.

    This storage engine has been deprecated.

    Overview

    JSON (JavaScript Object Notation) is a lightweight data-interchange format widely used on the Internet. Many applications, generally written in JavaScript or PHP use and produce JSON data, which are exchanged as files of different physical formats. JSON data is often returned from REST queries.

    It is also possible to query, create or update such information in a database-like manner. MongoDB does it using a JavaScript-like language. PostgreSQL includes these facilities by using a specific data type and related functions like dynamic columns.

    The CONNECT engine adds this facility to MariaDB by supporting tables based on JSON data files. This is done like for XML tables by creating tables describing what should be retrieved from the file and how it should be processed.

Starting with 1.07.0002, the internal way JSON is parsed and handled was changed. The main advantage of the new way is to reduce the memory required to parse JSON: it was from 6 to 10 times the size of the JSON source and is now only 2 to 4 times. However, this is in Beta mode and JSON tables are still handled using the old mode. To use the new mode, tables should be created with TABLE_TYPE=BSON. Another way is to set the session variable to 1 or ON; then all JSON tables are handled as BSON. Of course, this is temporary and, when successfully tested, the new way will replace the old way and all tables will be created as JSON.

    Let us start from the file “biblio3.json” that is the JSON equivalent of the XML Xsample file described in the XML table chapter:

    This file contains the different items existing in JSON.

    • Arrays: They are enclosed in square brackets and contain a list of comma separated values.

    • Objects: They are enclosed in curly brackets. They contain a comma separated list of pairs, each pair composed of a key name between double quotes, followed by a ‘:’ character and followed by a value.

• Values: Values can be an array or an object. They also can be a string between double quotes, an integer or float number, a Boolean value or a null value. The simplest way for CONNECT to locate a table in such a file is by an array containing a list of objects (this is what MongoDB calls a collection of documents). Each array value will be a table row, and each pair of the row objects will represent a column, the key being the column name and the value the column value.

A first try to create a table on this file is to take the outer array as the table:
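A sketch of such a definition (the column names follow the result shown below; the column sizes are only illustrative):

CREATE TABLE jsample (
  ISBN CHAR(15),
  AUTHOR CHAR(128),
  TITLE CHAR(32),
  PUBLISHER CHAR(32))
ENGINE=CONNECT table_type=JSON file_name='biblio3.json';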

    If we execute the query:

    We get the result:

    isbn
    author
    title
    publisher

    Note that by default, column values that are objects have been set to the concatenation of all the string values of the object separated by a blank. When a column value is an array, only the first item of the array is retrieved (This will change in later versions of Connect).

However, things are generally more complicated. While JSON files do not contain attributes (although object pairs are similar to attributes), they contain a new item: arrays. We have seen that they can be used like XML multiple nodes, here to specify several authors, but they are more general because they can contain objects of different types, even if it may not be advisable to do so.

This is why CONNECT enables the specification of a column option “JPATH” (FIELD_FORMAT until Connect 1.6) that is used to describe exactly where the items to display are and how to handle arrays.

    Here is an example of a new table that can be created on the same file, allowing choosing the column names, to get some sub-objects and to specify how to handle the author array.

    Until Connect 1.5:

    From Connect 1.6:

    From Connect 1.07.0002

    Given the query:

    The result is:

    title
    author
    publisher
    location

    Note: The JPATH was not specified for column ISBN because it defaults to the column name.

    Here is another example showing that one can choose what to extract from the file and how to “expand” an array, meaning to generate one row for each array value:

    Until Connect 1.5:

    From Connect 1.6:

    From Connect 1.06.006:

    From Connect 1.07.0002

    It is displayed as:

    ISBN
    Title
    AuthorFN
    AuthorLN
    Year

    Note: The example above shows that the ‘$.’, that means the beginning of the path, can be omitted.

    The Jpath Specification

From Connect 1.6, the Jpath specification has changed to be the one of the native JSON functions and is more compatible with what is generally used. It is close to the standard definition and compatible with what MongoDB and other products do. The ‘:’ separator is replaced by ‘.’. Position in an array is accepted MongoDB style, with no square brackets. Array specifications specific to CONNECT are still accepted, but [*] is used for expanding and [x] for multiplying. However, tables created with the previous syntax can still be used by adding SEP_CHAR=’:’ (this can be done with ALTER TABLE). Also, the option can now be specified as JPATH (was FIELD_FORMAT), but FIELD_FORMAT is still accepted.

    Until Connect 1.5, it is the description of the path to follow to reach the required item. Each step is the key name (case sensitive) of the pair when crossing an object, and the number of the value between square brackets when crossing an array. Each specification is separated by a ‘:’ character.

    From Connect 1.6, It is the description of the path to follow to reach the required item. Each step is the key name (case sensitive) of the pair when crossing an object, and the position number of the value when crossing an array. Key specifications are separated by a ‘.’ character.

    For instance, in the above file, the last name of the second author of a book is reached by:

$.AUTHOR[1].LASTNAME (standard style)
$AUTHOR.1.LASTNAME (MongoDB style)
AUTHOR:[1]:LASTNAME (old style, when SEP_CHAR=’:’ or until Connect 1.5)

    The ‘$’ or “$.” prefix specifies the root of the path and can be omitted with CONNECT.

    The array specification can also indicate how it must be processed:


    Specification
    Array Type
    Limit
    Description

    Note 1: When the LIMIT restriction is applicable, only the first m array items are used, m being the value of the LIMIT option (to be specified in option_list). The LIMIT default value is 10.

    Note 2: An alternative way to indicate what is to be expanded is to use the expand option in the option list, for instance:
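    A sketch of how this could look (the table name jsampex2 is hypothetical; with the expand option the column paths no longer carry the [*] specification):

```sql
CREATE TABLE jsampex2 (
  ISBN     CHAR(15),
  AuthorFN CHAR(128) JPATH='AUTHOR.FIRSTNAME',
  AuthorLN CHAR(128) JPATH='AUTHOR.LASTNAME')
ENGINE=CONNECT TABLE_TYPE=JSON FILE_NAME='biblio3.json'
OPTION_LIST='Expand=AUTHOR';
```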

    AUTHOR is here the key of the pair that has the array as a value (case sensitive). Expand is limited to only one branch (expanded arrays must be under the same object).

    Let us take as an example the file expense.json. The table jexpall expands everything under and including the week array:

    From Connect 1.07.0002

    From Connect.1.6

    Until Connect 1.5:
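    The original statement is missing here; a plausible 1.6+ sketch follows (the expense.json layout, with a WEEK array of objects each holding a NUMBER and an EXPENSE array of WHAT/AMOUNT pairs, is inferred from the results below):

```sql
CREATE TABLE jexpall (
  WHO    CHAR(12),
  WEEK   INT(2)      JPATH='$.WEEK[*].NUMBER',
  WHAT   CHAR(32)    JPATH='$.WEEK[*].EXPENSE[*].WHAT',
  AMOUNT DOUBLE(8,2) JPATH='$.WEEK[*].EXPENSE[*].AMOUNT')
ENGINE=CONNECT TABLE_TYPE=JSON FILE_NAME='expense.json';
```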

    WHO
    WEEK
    WHAT
    AMOUNT

    The table jexpw shows what was bought and the sum and average of amounts for each person and week:

    From Connect 1.07.0002

    From Connect 1.6:

    Until Connect 1.5:

    WHO
    WEEK
    WHAT
    SUM
    AVERAGE

    Let us see what the table jexpz does:

    From Connect 1.6:

    From Connect 1.07.0002

    Until Connect 1.5:

    WHO
    WEEKS
    SUMS
    SUM
    AVGS
    SUMAVG
    AVGSUM
    AVERAGE

    For all persons:

    • Column 1 shows the person name.

    • Column 2 shows the weeks for which values are calculated.

    • Column 3 lists the sums of expenses for each week.

    • Column 4 calculates the sum of all expenses by person.

    It would be very difficult, if even possible, to obtain this result from table jexpall using an SQL query.

    Handling of NULL Values

    JSON has an explicit null value that can be met in arrays or object key values. When regarding json as a relational table, a column value can be null either because the corresponding json item is explicitly null, or implicitly because the corresponding item is missing in an array or object. CONNECT does not make any distinction between explicit and implicit nulls.

    However, it is possible to specify how nulls are handled and represented. This is done by setting the string session variable connect_json_null. The default value of connect_json_null is “&lt;null&gt;”; it can be changed, for instance, by:
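    For instance (a sketch; the chosen representation string is arbitrary):

```sql
SET connect_json_null='NULL';
```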

    This changes its representation when a column displays the text of an object or the concatenation of the values of an array.

    It is also possible to tell CONNECT to ignore nulls by:
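    Presumably by setting the variable to an SQL NULL, for instance:

```sql
SET connect_json_null=NULL;
```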

    When doing so, nulls do not appear in object text or array lists. However, this does not change the behavior of array calculation nor the result of array count.

    Having Columns defined by Discovery

    It is possible to let the MariaDB discovery process do the job of column specification. When columns are not defined in the create table statement, CONNECT endeavors to analyze the JSON file and to provide the column specifications. This is possible only for tables represented by an array of objects because CONNECT retrieves the column names from the object pair keys and their definition from the object pair values. For instance, the jsample table could be created saying:
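    For instance (the file name is an assumption):

```sql
CREATE TABLE jsample
ENGINE=CONNECT TABLE_TYPE=JSON FILE_NAME='biblio3.json';
```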

    Let’s check how it was actually specified using the show create table statement:

    It is equivalent except for the column sizes that have been calculated from the file as the maximum length of the corresponding column when it was a normal value. For columns that are json arrays or objects, the column is specified as a varchar string of length 256, supposedly big enough to contain the sub-object's concatenated values. Nullable is set to true if the column is null or missing in some rows or if its JPATH contains arrays.

    If a more complex definition is desired, you can ask CONNECT to analyse the JPATH up to a given depth using the DEPTH or LEVEL option in the option list. Its default value is 0 but can be changed by setting the connect_default_depth session variable (in future versions the default will be 5). The depth value is the number of sub-objects that are taken into account in the JPATH (this is different from what is defined and returned by the native JSON depth function).

    For instance:
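    A sketch using the option list (the table name and the exact depth value are assumptions; the depth needed depends on the document):

```sql
CREATE TABLE jsampall
ENGINE=CONNECT TABLE_TYPE=JSON FILE_NAME='biblio3.json'
OPTION_LIST='Depth=3';
```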

    This will define the table as:

    From Connect 1.07.0002

    From Connect 1.6:

    Until Connect 1.5:

    For columns that are simple values, the Json path is the column name. This is the default when the Jpath option is not specified, so it was not specified for such columns. However, you can force discovery to specify it by setting the connect_all_path variable to 1 or ON. This can be useful if you plan to change the name of such columns; it relieves you of manually specifying the path (otherwise the path would default to the new name and the column would not be found, or would be found wrongly).

    Another problem is that CONNECT cannot guess what you want to do with arrays. Here the AUTHOR array is set to 0, which means that only its first value is retrieved, unless you had also specified “Expand=AUTHOR” in the option list. But of course, you can replace it with anything else.

    This method can be used as a quick way to make a “template” table definition that can later be edited to make the desired definition. In particular, column names are constructed from all the object keys of their path in order to have distinct column names. This can be manually edited to have the desired names, provided their JPATH key names are not modified.

    DEPTH can also be given the value -1 to create only columns that are simple values (no array or object). It normally defaults to 0, but this can be modified by setting the connect_default_depth variable.

    Note: Since version 1.6.4, CONNECT eliminates columns that are “void” or whose type cannot be determined. For instance given the file sresto.json:

    Previously, when using discovery, creating the table by:

    The table was previously created as:

    The column “grades” was added because of the void array in line 2. Now this column is skipped and does not appear anymore (unless the option Accept=1 is added in the option list).

    JSON Catalogue Tables

    Another way to see JSON table column specifications is to use a catalogue table. For instance:
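    A hedged sketch (the catalog column names used in the SELECT are the usual CONNECT ones and are assumptions here):

```sql
CREATE TABLE bibcol
ENGINE=CONNECT TABLE_TYPE=JSON FILE_NAME='biblio3.json'
OPTION_LIST='Depth=3' CATFUNC=columns;

SELECT column_name, type_name type, column_size size, jpath FROM bibcol;
```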

    which returns:

    From Connect 1.07.0002:

    column_name
    type
    size
    jpath

    From Connect 1.6:

    column_name
    type
    size
    jpath

    Until Connect 1.5:

    column_name
    type
    size
    jpath

    All this is mostly useful when creating a table on a remote file that you cannot easily see.

    Finding the table within a JSON file

    Given the file “facebook.json”:

    The table we want to analyze is represented by the array value of the “data” object. Here is how this is specified in the create table statement:

    From Connect 1.07.0002:

    From Connect 1.6:

    Until Connect 1.5:
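    A minimal sketch (the real column list is longer; the column names, paths and date format are assumptions based on the description below):

```sql
CREATE TABLE jfacebook (
  Name    CHAR(32)     JPATH='from.name',
  Message VARCHAR(256) JPATH='message',
  Created DATETIME     DATE_FORMAT='YYYY-MM-DD hh:mm:ss' JPATH='created_time')
ENGINE=CONNECT TABLE_TYPE=JSON FILE_NAME='facebook.json'
OPTION_LIST='Object=data';
```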

    It is the OBJECT option that gives the Jpath of the table. Note also the alternate way to declare the array to be expanded, using the expand option of the option_list.

    Because some string values contain a date representation, the corresponding columns are declared as datetime and the date format is specified for them.

    The Jpath of the object option has the same syntax as the column Jpath but of course all array steps must be specified using the [n] (until Connect 1.5) or n (from Connect 1.6) format.

    Note: This applies to the whole document for tables having PRETTY = 2 (see below). Otherwise, it applies to the document objects of each file record.

    JSON File Formats

    The examples we have seen so far are files that, even though they can be formatted in different ways (blanks, tabs, carriage returns and line feeds are ignored when parsing them), respect the JSON syntax and are made of only one item (object or array). Like XML files, they are entirely parsed and a memory representation is used to process them. This implies that they must be of reasonable size to avoid an out of memory condition. Tables based on such files are recognized by the option Pretty=2, which we did not specify above because it is the default.

    An alternate format, which is the format of exported MongoDB files, is a file where each row is physically stored in one file record. For instance:

    The original file, “cities.json”, has 29352 records. To base a table on this file we must specify the option Pretty=0 in the option list. For instance:

    From Connect 1.07.0002:

    From Connect 1.6:

    Until Connect 1.5:
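    A plausible sketch (the column names and sizes, the 'loc' array layout and the DISTRIB value are assumptions):

```sql
CREATE TABLE cities (
  `_id`   CHAR(24)     NOT NULL,
  `city`  CHAR(32)     NOT NULL,
  `lng`   DOUBLE(12,6) NOT NULL JPATH='loc.0',
  `lat`   DOUBLE(12,6) NOT NULL JPATH='loc.1',
  `pop`   INT(8)       NOT NULL,
  `state` CHAR(2)      NOT NULL DISTRIB='clustered')
ENGINE=CONNECT TABLE_TYPE=JSON FILE_NAME='cities.json'
LRECL=128 OPTION_LIST='Pretty=0';
```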

    Note the use of [n] (until Connect 1.5) or n (from Connect 1.6) array specifications for the longitude and latitude columns.

    When using this format, the table is processed by CONNECT like a DOS, CSV or FMT table. Rows are retrieved and parsed by records and the table can be very large. Another advantage is that such a table can be indexed, which can be of great value for very large tables. The “distrib” option of the “state” column tells CONNECT to use block indexing when possible.

    For such tables – as well as for pretty=1 ones – the record size must be specified using the LRECL option. Be sure you don’t specify it too small as it is used to allocate the read/write buffers and the memory used for parsing the rows. If in doubt, be generous as it does not cost much in memory allocation.

    Another format exists, denoted by Pretty=1, which is similar to this one but has some additions to represent a JSON array: a header and a trailer record are added containing the opening and closing square brackets, and all records but the last are followed by a comma. It has the same advantages for reading and updating, but inserting and deleting are executed the pretty=2 way.

    Alternate Table Arrangement

    We have seen that the most natural way to represent a table in a JSON file is to base it on an array of objects. However, other possibilities exist: a table can be an array of arrays, a one-column table can be an array of values, and a one-row table can be just one object or one value. Single-row tables are internally handled by adding a one-value array around them.

    Let us see how to handle, for instance, a table that is an array of arrays. The file:

    A table can be created on this file as:

    From Connect 1.07.0002:

    From Connect 1.6:

    Until Connect 1.5:
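    A sketch of such a definition (the file name and column types are assumptions; the column names match the result shown below):

```sql
CREATE TABLE xjson (
  `a` INT(6)       JPATH='1',
  `b` CHAR(32)     JPATH='2',
  `c` DOUBLE(10,4) JPATH='3')
ENGINE=CONNECT TABLE_TYPE=JSON FILE_NAME='test2.json'
OPTION_LIST='Jmode=1,Base=1';
```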

    Columns are specified by their position in the row arrays. By default, this is zero-based but for this table the base was set to 1 by the Base option of the option list. Another new option in the option list is Jmode=1. It indicates what type of table this is. The Jmode values are:

    0. An array of objects. This is the default.

    1. An array of arrays, like this one.

    2. An array of values.

    When reading, this is not required as the type of the array items is specified for the columns; however, it is required when inserting new rows so CONNECT knows what to insert. For instance:
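    A sketch, reusing the xjson table defined above:

```sql
INSERT INTO xjson VALUES (25, 'Breakfast', 1.414);
```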

    After this, it is displayed as:

    a
    b
    c

    Unspecified array values are represented by their first element.

    Getting and Setting JSON Representation of a Column

    We have seen that columns corresponding to a Json object or array are retrieved by default as the concatenation of all their values separated by a blank. It is also possible to retrieve and display such a column's contents as the full JSON string corresponding to it in the JSON file. This is specified in the JPATH by a “*” where the object or array would be specified.

    Note: When columns are generated by discovery, this can be specified by setting the STRINGIFY option to ON or 1 in the option list.

    For instance:

    From Connect 1.07.0002:

    From Connect 1.6:

    Until Connect 1.5:
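    A hedged sketch of such a table (sizes and file name are assumptions); the '*' in the JPATH of json_Author asks for the full JSON representation of the AUTHOR array:

```sql
CREATE TABLE jsample2 (
  ISBN        CHAR(15),
  Title       CHAR(32)  JPATH='TITLE',
  json_Author CHAR(255) JPATH='AUTHOR.*',
  Year        INT(4)    JPATH='DATEPUB')
ENGINE=CONNECT TABLE_TYPE=JSON FILE_NAME='biblio3.json';
```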

    Now the query:
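    Presumably something like:

```sql
SELECT json_Author FROM jsample2;
```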

    will return and display:

    json_Author

    Note: Prefixing the column name with json_ is optional but is useful when using the column as an argument to CONNECT UDF functions, ensuring it is recognized as valid Json without aliasing.

    This also works on input: a column specified this way can be directly set to a valid JSON string.

    This feature is of great value as we will see below.

    Create, Read, Update and Delete Operations on JSON Tables

    The SQL commands INSERT, UPDATE and DELETE are fully supported for JSON tables except those returned by REST queries. For INSERT and UPDATE, if the target values are simple values, there are no problems.

    However, there are some issues when the added or modified values are objects or arrays.

    Concerning objects, the same problems exist that we have already seen with the XML type. The added or modified object will have the format described in the table definition, which can be different from the one in the JSON file. Modifications should be done using a table that specifies the full path of the modified objects.

    New problems arise when trying to modify the values of an array. Only updates can be done on the original table. First of all, for the values of the array to be seen as distinct values, all update operations concerning array values must be done using a table that expands this array.

    For instance, to modify the authors of the biblio.json based table, the jsampex table must be used. Doing so, updating and deleting authors is possible using standard SQL commands. For example, to change the first name of Knab from François to John:
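    A sketch of the update, using the jsampex table defined earlier:

```sql
UPDATE jsampex SET AuthorFN = 'John' WHERE AuthorLN = 'Knab';
```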

    However, it would be wrong to do:
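    For instance (the ISBN value is taken from the sample data):

```sql
UPDATE jsampex SET AuthorFN = 'John' WHERE ISBN = '9782212090819';
```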

    Because this would change the first name of both authors as they share the same ISBN.

    Where things become more difficult is when trying to delete or insert an author of a book. Indeed, a delete command will delete the whole book, and an insert command will add a new complete row instead of adding a new author to the same array. Here we are penalized by the SQL language, which gives us no way to specify this. Something like the following would be needed:
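    Something along these lines (deliberately invalid SQL, shown only to illustrate what cannot be expressed):

```sql
-- NOT valid SQL: INSERT has no WHERE clause
INSERT INTO jsampex (AuthorFN, AuthorLN) VALUES ('Charles', 'Dickens')
WHERE ISBN = '9782840825685';
```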

    However, this does not exist in SQL. Does this mean that it is impossible to do? No, but it requires us to use a table specified on the same file but adapted to this task. One way to do it is to specify a table for which the authors are no longer an expanded array. Supposing we want to add an author to the “XML en Action” book, we will do it on a table containing just the author(s) of that book, which is the second book of the table.

    From Connect 1.6:

    Until Connect 1.5
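    A hedged 1.6+ sketch (table name and sizes are assumptions; the OBJECT option points at the AUTHOR array of the second book, index 1):

```sql
CREATE TABLE jauthor (
  FIRSTNAME CHAR(64),
  LASTNAME  CHAR(64))
ENGINE=CONNECT TABLE_TYPE=JSON FILE_NAME='biblio3.json'
OPTION_LIST='Object=1.AUTHOR';
```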

    The command:

    replies:

    FIRSTNAME
    LASTNAME

    It is a standard JSON table that is an array of objects in which we can freely insert or delete rows.
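    For instance, the new author can be added by (a sketch using the jauthor table above):

```sql
INSERT INTO jauthor VALUES ('Charles', 'Dickens');
```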

    We can check that this was done correctly by:

    This will display:

    ISBN
    Title
    AuthorFN
    AuthorLN
    Year

    Note: If this table were a big table with many books, it would be difficult to know what the order of a specific book is in the table. This can be found by adding a special ROWID column in the table.

    However, an alternate way to do it is by using direct JSON column representation as in the JSAMPLE2 table. This can be done by:
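    A sketch of such an update (the JSON string is what the full author array should become):

```sql
UPDATE jsample2 SET json_Author =
  '[{"FIRSTNAME":"William J.","LASTNAME":"Pardi"},{"FIRSTNAME":"Charles","LASTNAME":"Dickens"}]'
WHERE ISBN = '9782840825685';
```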

    Here, we didn't have to find the index of the sub array to modify. However, this is not quite satisfying because we had to manually write the whole JSON value to set to the json_Author column.

    Therefore we need specific functions to do so. They are introduced now.

    JSON User Defined Functions

    Although such functions written by other parties do exist, CONNECT provides its own UDFs that are specifically adapted to the JSON table type and are easily available because, being inside the CONNECT library or DLL, they require no additional module to be loaded (it is also possible to build these functions in a separate library module).

    Here is the list of the CONNECT functions; more can be added if required.

    Name
    Type
    Return
    Description
    Added

    String values are mapped to JSON strings. These strings are automatically escaped to conform to the JSON syntax. The automatic escaping is bypassed when the value has an alias beginning with ‘json_’. This is automatically the case when a JSON UDF argument is another JSON UDF whose name begins with “json_” (not case sensitive). This is why all functions that do not return a Json item are not prefixed by “json_”.

    Argument string values can, for some functions, alternatively be json file names. When this is ambiguous, alias them as jfile_. Full paths should be used because UDF functions have no means of knowing what the current database is. Apparently, when the file name path is not full, it is based on the MariaDB data directory, but I am not sure this is always true.

    Numeric values are (big) integers, double floating point values or decimal values. Decimal values are character strings containing a numeric representation and are treated as strings. Floating point values contain a decimal point and/or an exponent. Integers are written without decimal points.

    To install these functions, execute the following commands:

    Note

    Json function names are often written on this page with leading upper case letters for clarity. It is possible to do so in SQL queries because function names are case insensitive. However, when creating or dropping them, their names must match the case used in the library module, which is lower case.

    On Unix systems (from Connect 1.7.02):

    On Unix systems (from Connect 1.6):

    On Unix systems (until Connect 1.5):

    On WIndows (from Connect 1.7.02):

    On WIndows (from Connect 1.6):

    On WIndows (until Connect 1.5):
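    The original per-platform command lists are missing from this export. As an indicative sketch, each UDF is created from the CONNECT library (ha_connect.so on Unix, ha_connect.dll on Windows), with aggregate functions declared as such; the names below are illustrative, not exhaustive:

```sql
CREATE FUNCTION jsonvalue RETURNS STRING SONAME 'ha_connect.so';
CREATE FUNCTION json_make_array RETURNS STRING SONAME 'ha_connect.so';
CREATE FUNCTION json_array_add_values RETURNS STRING SONAME 'ha_connect.so';
CREATE FUNCTION jsonget_int RETURNS INTEGER SONAME 'ha_connect.so';
CREATE AGGREGATE FUNCTION json_array_grp RETURNS STRING SONAME 'ha_connect.so';
-- ... and so on for the other functions listed above; on Windows use 'ha_connect.dll'.
```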

    Jfile_Bjson

    MariaDB starting with

    JFile_Bjson was introduced in MariaDB.

    Converts the pretty=0 json file given as first argument to a Bjson file. B(inary)json is a pre-parsed json format. It is described below in the Performance chapter (available in later Connect versions).

    Jfile_Convert

    MariaDB starting with

    JFile_Convert was introduced in MariaDB.

    Converts the json file given as first argument to another json file in pretty=0 format. The third integer argument is the record length to use. This is often required to process huge json files that would be very slow to use in pretty=2 format.

    This is done without completely parsing the file; it is very fast and does not require much memory.

    Jfile_Make

    Jfile_Make was added in CONNECT 1.4

    The first argument must be a json item (if it is just a string, Jfile_Make will try its best to see if it is a json item or an input file name). The following arguments are a string file name and an integer pretty value (defaulting to 2) in any order. This function creates a json file containing the first argument item.

    The returned string value is the created file name. If not specified as an argument, the file name can in some cases be retrieved from the first argument; in such cases the file itself is modified.

    This function can be used to create or format a json file. For instance, supposing we want to format the file tb.json, this can be done with the query:
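    A sketch (the jfile_ alias tells the UDF that the string is a file name):

```sql
SELECT Jfile_Make('tb.json' jfile_, 2);
```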

    The tb.json file is changed to:

    Json_Array_Add

    Note: The following describes this function as of CONNECT version 1.4. The first argument must be a JSON array. The second argument is added as a member of this array:
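    A sketch (Json_Make_Array is the 1.6+ name of Json_Array):

```sql
SELECT Json_Array_Add(Json_Make_Array(56, 3.1416, 'machin', NULL), 'One more') "Array";
```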

    Array

    Note: The first array is not escaped, its (alias) name beginning with ‘json_’.

    Now we can see how adding an author to the JSAMPLE2 table can alternatively be done:
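    A hedged sketch (the object keys are given through aliases, as described later for Json_Make_Object):

```sql
UPDATE jsample2
SET json_Author = Json_Array_Add(json_Author,
      Json_Make_Object('Charles' FIRSTNAME, 'Dickens' LASTNAME))
WHERE ISBN = '9782840825685';
```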

    Note: Giving a column that returns JSON a name prefixed with json_ (like json_author here) is good practice; it removes the need to give it an alias to prevent escaping when it is used as an argument.

    Additional arguments: If a third integer argument is given, it specifies the position (zero based) of the added value:
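    For instance (a sketch; the json_ alias prevents the string from being escaped):

```sql
SELECT Json_Array_Add('[5,3,8,7,9]' json_, 4, 2) "Array";
```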

    Array

    If a string argument is added, it specifies the Json path to the array to be modified. For instance:

    Json_Array_Add('{"a":1,"b":2,"c":[3, 4]}' json_, 5, 1, 'c')

    Json_Array_Add_Values

    Json_Array_Add_Values added in CONNECT 1.4 replaces the function Json_Array_Add of CONNECT version 1.3.

    The first argument must be a JSON array string. Then all other arguments are added as members of this array:
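    A sketch using the 1.6+ function names:

```sql
SELECT Json_Array_Add_Values(Json_Make_Array(56, 3.1416, 'machin', NULL), 'One more', 'Two more') "Array";
```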

    Array

    Json_Array_Delete

    The first argument should be a JSON array. The second argument is an integer indicating the rank (0 based conforming to general json usage) of the element to delete:

    Array

    Now we can see how to delete the second author from the JSAMPLE2 table:
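    A sketch (rank 1 is the second author):

```sql
UPDATE jsample2 SET json_Author = Json_Array_Delete(json_Author, 1)
WHERE ISBN = '9782840825685';
```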

    A Json path can be specified as a third string argument

    Json_Array_Grp

    This is an aggregate function that makes an array filled from values coming from the rows retrieved by a query. Let us suppose we have the pet table:

    name
    race
    number

    The query:
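    Presumably something like:

```sql
SELECT name, Json_Array_Grp(race) FROM pet GROUP BY name;
```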

    will return:

    name

    One problem with the JSON aggregate functions is that they construct their result in memory and cannot know the needed amount of storage, not knowing the number of rows of the used table.

    Therefore, the number of values for each group is limited. This limit is the value of JsonGrpSize whose default value is 10 but can be set using the JsonSet_Grp_Size function. Nevertheless, working on a larger table is possible, but only after setting JsonGrpSize to the ceiling of the number of rows per group for the table. Try not to set it to a very large value to avoid memory exhaustion.

    JsonContains

    This function can be used to check whether an item is contained in a document. Its arguments are the same as those of the JsonLocate function; only the return value changes. The integer returned value is 1 if the item is contained in the document, or 0 otherwise.

    JsonContains_Path

    This function can be used to check whether a Json path is contained in the document. The integer returned value is 1 if the path is contained in the document, or 0 otherwise.

    Json_File

    The first argument must be a file name. This function returns the text of the file that is supposed to be a json file. If only one argument is specified, the file text is returned without being parsed. Up to two additional arguments can be specified:

    A string argument is the path to the sub-item to be returned. An integer argument specifies the pretty format value of the file.

    This function is chiefly used to get the json item argument of other json functions from a json file. For instance, supposing the file tb.json is:

    Extracting a value from it can be done with a query such as:

    This query returns:

    Type

    However, we’ll see that, most of the time, it is better to use Jbin_File or to directly specify the file name in queries. In particular this function should not be used for queries that must modify the json item because, even if the modified json is returned, the file itself would be unchanged.

    Json_Get_Item

    Json_Get_Item was added in CONNECT 1.4.

    This function returns a subset of the json document passed as first argument. The second argument is the json path of the item to be returned and should be one returning a json item (terminated by a ‘*’). If not, the function will try to make it right but this is not foolproof. For instance:

    The correct path should have been ‘second.*’, but in this simple case the function was able to make it right. The returned item:

    item

    Note: The array is aliased “json_second” to indicate it is a json item and avoid escaping it. However, the “json_” prefix is skipped when making the object and must not be added to the path.

    JsonGet_Grp_Size

    This function returns the JsonGrpSize value.

    JsonGet_String / JsonGet_Int / JsonGet_Real

    JsonGet_String, JsonGet_Int and JsonGet_Real were added in CONNECT 1.4.

    The first argument should be a JSON item. If it is a string with no alias, it is converted to a json item. The second argument is the path of the item to be located in the first argument and returned, converted according to the function used:
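    A sketch (the sample object is an assumption matching the values shown below):

```sql
SELECT
  JsonGet_String('{"qty":7,"price":29.50,"garanty":null}', 'price') "String",
  JsonGet_Int('{"qty":7,"price":29.50,"garanty":null}', 'price')    "Int",
  JsonGet_Real('{"qty":7,"price":29.50,"garanty":null}', 'price')   "Real";
```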

    This query returns:

    String
    Int
    Real

    The function JsonGet_Real can be given a third argument to specify the number of decimal digits of the returned value. For instance:

    This query returns:

    String

    The given path can specify all operators for arrays except the “expand” [*] operator. For instance:
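    A sketch (bracketed operators shown; in recent versions positions can also be written without brackets):

```sql
SELECT
  JsonGet_Int(Json_Make_Array(45,28,36,45,89), '[4]')      "Rank",
  JsonGet_Int(Json_Make_Array(45,28,36,45,89), '[#]')      "Number",
  JsonGet_String(Json_Make_Array(45,28,36,45,89), '[","]') "Concat",
  JsonGet_Int(Json_Make_Array(45,28,36,45,89), '[+]')      "Sum",
  JsonGet_Real(Json_Make_Array(45,28,36,45,89), '[!]', 2)  "Avg";
```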

    The result:

    Rank
    Number
    Concat
    Sum
    Avg

    Json_Item_Merge

    This function merges two arrays or two objects. For arrays, this is done by adding to the first array all the values of the second array. For instance:
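    A sketch using the 1.6+ names:

```sql
SELECT Json_Item_Merge(Json_Make_Array('a','b','c'), Json_Make_Array('d','e','f')) Result;
```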

    The function returns:

    Result

    For objects, the pairs of the second object are added to the first object if the key does not yet exist in it; otherwise the pair of the first object is set with the value of the matching pair of the second object. For instance:

    The function returns:

    Result

    JsonLocate

    The first argument must be a JSON tree. The second argument is the item to be located. The item to be located can be a constant or a json item. Constant values must be equal in type and value to be found. This is "shallow equality" – strings, integers and doubles won't match.

    This function returns the json path to the located item or null if it is not found:

    This query returns:

    Path

    The path syntax is the same used in JSON CONNECT tables.

    By default, the path of the first occurrence of the item is returned. The third parameter can be used to specify the occurrence whose path is to be returned. For instance:

    first
    second
    wrong type
    json

    For string items, the comparison is case sensitive by default. However, it is possible to specify a string to be compared case insensitively by giving it an alias beginning by “ci”:

    Path

    Json_Locate_All

    The first argument must be a JSON item. The second argument is the item to be located. This function returns the paths to all locations of the item as an array of strings:

    This query returns:

    All paths

    The returned array can be passed to other functions. For instance, to get the number of occurrences of an item in a json tree, you can do:

    The displayed result:

    Nb of occurs

    If specified, the third integer argument sets the depth to search in the document, that is, the maximum number of items in the paths. This value defaults to 10 but can be increased for complex documents or reduced to set the maximum wanted depth of the returned paths.

    Json_Make_Array

    Json_Make_Array returns a string denoting a JSON array with all its arguments as members:

    Json_Make_Array(56, 3.1416, 'My name is "Foo"', NULL)

    Note: The argument list can be void. If so, a void array is returned.

    Json_Make_Object

    Json_Make_Object returns a string denoting a JSON object. For instance:

    The object is filled with pairs corresponding to the given arguments. The key of each pair is made from the argument (default or specified) alias.

    Json_Make_Object(56, 3.1416, 'machin', NULL)

    When needed, it is possible to specify the keys by giving an alias to the arguments:

    Json_Make_Object(56 qty,3.1416 price,'machin' truc, NULL garanty)

    If the alias is prefixed by ‘json_’ (to prevent escaping), the key name is stripped of that prefix.

    This function is chiefly useful when entering values retrieved from a table, the key being by default the column name:

    Json_Make_Object(matricule, nom, titre, salaire)

    Json_Object_Add

    The first argument must be a JSON object. The second argument is added as a pair to this object:
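    For instance (a sketch; the json_old alias marks the string as JSON, and the 'color' alias becomes the key of the added pair):

```sql
SELECT Json_Object_Add('{"item":"T-shirt","qty":27,"price":24.99}' json_old, 'blue' color) newobj;
```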

    newobj

    Note: If the specified key already exists in the object, its value is replaced by the new one.

    The third string argument is a Json path to the target object.

    Json_Object_Delete

    The first argument must be a JSON object. The second argument is the key of the pair to delete:

    newobj

    The third string argument is a Json path to the object to be the target of deletion.

    Json_Object_Grp

    This function works like Json_Array_Grp. It makes a JSON object filled with pairs whose values come from its first argument and whose keys come from its second argument.

    This can be seen with the query:
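    Presumably something like:

```sql
SELECT name, Json_Object_Grp(number, race) FROM pet GROUP BY name;
```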

    This query returns:

    name
    json_object_grp(number,race)

    Json_Object_Key

    Returns a string denoting a JSON object. For instance:

    The object is filled with pairs made from each key/value arguments.

    Json_Object_Key('qty', 56, 'price', 3.1416, 'truc', 'machin', 'garanty', NULL)

    Json_Object_List

    The first argument must be a JSON object. This function returns an array containing the list of all keys existing in the object:

    Key List

    Json_Object_Nonull

    This function works like Json_Make_Object, but “null” arguments are ignored and not inserted in the object. Arguments are regarded as “null” if they are JSON null values, void arrays or objects, or arrays or objects containing only null members.

    It is mainly used to avoid constructing useless null items when converting tables (see later).

    Json_Object_Values

    The first argument must be a JSON object. This function returns an array containing the list of all values existing in the object:

    Value List

    JsonSet_Grp_Size

    This function is used to set the JsonGrpSize value. This value is used by the following aggregate functions as a ceiling value of the number of items in each group. It returns the JsonGrpSize value that can be its default value when passed 0 as argument.

    Json_Set_Item / Json_Insert_Item / Json_Update_Item

    These functions insert or update data in a JSON document and return the result. The value/path pairs are evaluated left to right. The document produced by evaluating one pair becomes the new value against which the next pair is evaluated.

    • Json_Set_Item replaces existing values and adds non-existing values.

    • Json_Insert_Item inserts values without replacing existing values.

    • Json_Update_Item replaces only existing values.

    Example:
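    A hedged sketch (the sample document and paths are assumptions; value/path pairs follow the first argument):

```sql
SELECT
  Json_Set_Item('[1,2,3,{"quatre":4}]' json_, 'foo', '$[1]', 5, '$[3].cinq')    "Set",
  Json_Insert_Item('[1,2,3,{"quatre":4}]' json_, 'foo', '$[1]', 5, '$[3].cinq') "Insert",
  Json_Update_Item('[1,2,3,{"quatre":4}]' json_, 'foo', '$[1]', 5, '$[3].cinq') "Update";
```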

    This query returns:

    Set
    Insert
    Update

    JsonValue

    Returns a JSON value as a string, for instance:

    JsonValue(3.1416)

    The “JBIN” return type

    Almost all functions returning a json string - whose name begins with Json_ - have a counterpart with a name beginning with Jbin_. This is both for performance (speed and memory) as well as for better control of what the functions should do.

    This is due to the way CONNECT UDFs work internally. The Json functions, when receiving json strings as parameters, parse them and construct a binary tree in memory. They work on this tree and, before returning, serialize it to return a new json string.

    If the json document is large, this can take up a large amount of time and storage space. It is all right when one simple json function is called – it must be done anyway – but is a waste of time and memory when json functions are used as parameters to other json functions.

    To avoid multiple serializing and parsing, the Jbin functions should be used as parameters to other functions. Indeed, they do not serialize the memory document tree, but return a structure allowing the receiving function to have direct access to the memory tree. This saves the serialize-parse steps otherwise needed to pass the argument and removes the need to reallocate the memory of the binary tree, which by the way is 6 to 7 times the size of the json string. For instance:
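    A sketch (Json_Make_Object is the 1.6+ name of Json_Object; the jbin_foo alias both marks the argument as a binary tree and provides the object key):

```sql
SELECT Json_Make_Object(Jbin_Array_Add(Jbin_Array('a','b','c'), 'd') jbin_foo) Result;
```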

    This query returns:

    Result

    Here the binary json tree allocated by Jbin_Array is completed by Jbin_Array_Add and Json_Object and serialized only once to make the final result string. It would be serialized and parsed two more times if using “Json” functions.

    Note that Jbin results are recognized as such because they are aliased beginning with “Jbin_”. This is why in the Json_Object function the alias is specified as “Jbin_foo”.

    What happens if it is not recognized as such? These functions are declared as returning a string and to take care of this, the returned structure begins with a zero-terminated string. For instance:

    This query replies:

    Jbin_Array('a','b','c')

    Note: When testing, the tree returned by a “Jbin” function can be seen using the Json_Serialize function whose unique parameter must be a “Jbin” result. For instance:

    This query returns:

    Json_Serialize(Jbin_Array('a','b','c'))

    Note: For this simple example, this is equivalent to using the Json_Array function.

    Using a file as json UDF first argument

    We have seen that many json UDFs can have an additional argument not yet described. This is used in the case where the json item argument refers to a file. The additional integer argument is then the pretty value of the json file. It matters only when the first argument is just a file name (to make the UDF understand this argument is a file name, it should be aliased with a name beginning with jfile_) or if the function modifies the file, in which case it is rewritten with this pretty format.

    The json item is created by extracting the required part from the file. This can be the whole file but more often only some of it. There are two ways to specify the sub-item of the file to be used:

    1. Specifying it in the Json_File or Jbin_File arguments.

    2. Specifying it in the receiving function (not possible for all functions).

    It doesn’t make any difference when the Jbin_File is used but it does with Json_File. For instance:
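    The two queries compared are, presumably:

```sql
SELECT Json_Array_Add(Json_File('test.json', 'b'), 66);   -- path given to Json_File
SELECT Json_Array_Add(Json_File('test.json'), 66, 'b');   -- path given to Json_Array_Add
```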

    The second query returns:

    Json_Array_Add(Json_File('test.json', 'b'), 66)

    It just returns the (modified) subset returned by the Json_File function, while the query:

    returns what was received from Json_File with the modification made on the subset.

    Json_Array_Add(Json_File('test.json'), 66, 'b')

    Note that in both cases the test.json file is not modified. This is because the Json_File function returns a string representing all or part of the file text, but no information about the file name. This is all right to check what the effect of the modification to the file would be.

    However, to have the file modified, use the Jbin_File function or directly give the file name. Jbin_File returns a structure containing the file name, a pointer to the file parsed tree and eventually a pointer to the subset when a path is given as a second argument:
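    For instance (a sketch):

```sql
SELECT Json_Array_Add(Jbin_File('test.json', 'b'), 66);
```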

    This query returns:

    Json_Array_Add(Jbin_File('test.json', 'b'), 66)

    This time the file is modified. This can be checked with:

    Json_File('test.json', 3)

    The reason why the first argument is returned by such a query is because of tables such as:

    In this table, the jfile_cols column just contains a file name. If we update it by:

    It is the test.json file that must be modified, not the jfile_cols column. This can be checked by:

    JsonGet_String(jfile_cols, '[1]:*')

    Note: Naming the second column of the table with a name beginning with “jfile_” is an important convenience: the json functions then know it is a file name without requiring an alias to be specified in the queries.

    Using “Jbin” to control what the query execution does

    This applies in particular when acting on json files. We have seen that a file was not modified when using the Json_File function as an argument to a modifying function, because the modifying function just received a copy of the json file. This is not true when using the Jbin_File function, which does not serialize the binary document and makes it directly accessible. Also, as we have seen earlier, json functions that modify their first file parameter modify the file and return the file name. This is done by directly serializing the internal binary document as a file.

    However, the “Jbin” counterpart of these functions does not serialize the binary document and thus does not modify the json file. For example, let us compare these two queries:

    /* First query */

    /* Second query */

    Both queries return:

    Result

    In the first query Jbin_Object_Add does not serialize the document (no “Jbin” functions do) and Json_Object just returns a serialized modified tree. Consequently, the file bt2.json is not modified. This query is all right to copy a modified version of the json file without modifying it.

    However, in the second query Json_Object_Add does modify the json file and returns the file name. The Json_Object function receives this file name, reads and parses the file, makes an object from it and returns the serialized result. This modification can be done willingly but can be an unwanted side effect of the query.

    Therefore, using “Jbin” argument functions, in addition to being faster and using less memory, is also safer when dealing with json files that should not be modified.

    Using JSON as Dynamic Columns

    The JSON NoSQL language has all the features needed to be used as an alternative to dynamic columns. For instance, take the following example of dynamic columns:

    /* Remove a column: */

    /* Add a column: */

    /* You can also list all columns, or get them together with their values in JSON format: */

    The same result can be obtained with json columns using the json UDF’s:

    /* JSON equivalent */

    /* Remove a column: */

    /* Add a column */

    /* You can also list all columns, or get them together with their values in JSON format: */

    However, using JSON brings features not existing in dynamic columns:

    • Use of a language known by many implementations and developers.

    • Full support of arrays, currently missing from dynamic columns.

    • Access to subparts of the json by JPATH, which can include calculations on arrays.

    • Possible references to json files.

    With more experience, additional UDFs can be easily written to support new needs.

    New Set of BSON Functions

    All these functions have been rewritten using the new JSON handling and are temporarily available by changing the initial J of their names to B. Thus the new-style Json_Make_Array is called Bson_Make_Array. Some, such as Bson_Item_Delete, are new, and some fix bugs found in their Json counterparts.

    Converting Tables to JSON

    The JSON UDFs and the direct Jpath “*” facility are powerful tools to convert tables and files to the JSON format. For instance, the file biblio3.json we used previously can be obtained by converting the xsample.xml file. This can be done like this:

    From Connect 1.07.0002

    Before Connect 1.07.0002

    And then:

    The xj1 table rows directly receive the Json objects made by the select statement used in the insert statement, and the table file is made as shown (xj1 is pretty=2 by default). Its mode is Jmode=2 because the values inserted are strings, even if they denote json objects.

    Another way to do this is to create a table describing the file format we want before the biblio3.json file existed:

    From Connect 1.07.0002

    Before Connect 1.07.0002

    and to populate it by:

    This is a simpler method. However, the issue is that this method cannot handle multiple column values. This is why we inserted from xsampall, not from xsampall2. How can we add the missing multiple authors in this table? Here again we must create a utility table able to handle JSON strings.

    From Connect 1.07.0002

    Before Connect 1.07.0002

    Voilà !

    Converting json files

    We have seen that json files can be formatted differently depending on the pretty option. In particular, big data files should be formatted with pretty equal to 0 when used by a CONNECT json table. The best and simplest way to convert a file from one format to another is to use the Jfile_Make function. Indeed this function makes a file of specified format using the syntax:

    The file name is optional when the json document comes from a Jbin_File function because the returned structure makes it available. For instance, to convert back the json file tb.json to pretty=0, this can simply be done by:
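    A sketch:

```sql
SELECT Jfile_Make(Jbin_File('tb.json'), 0);
```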

    Performance Consideration

    MySQL and PostgreSQL have a JSON data type that is not just text but an internal encoding of JSON data. This is to save parsing time when executing JSON functions. Of course, the parse must be done anyway when creating the data and serializing must be done to output the result.

    CONNECT works directly on character strings representing JSON values, at the cost of parsing them each time, but with the advantage of working easily on external data. Generally, this is not too penalizing because JSON data are often of small or reasonable size. The only case where it can be a serious problem is when working on a big JSON file.

    Then, the file should be formatted or converted to pretty=0.

    From Connect 1.7.002, this is easily done using the Jfile_Convert function, for instance:
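    A sketch (the file names and record length are placeholders):

```sql
SELECT Jfile_Convert('bigdoc.json', 'bigdoc0.json', 350);
```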

    Such a json file should not be used directly by JSON UDFs because they parse the whole file, even when only a subset is used. Instead, it should be used by a JSON table created on it. Indeed, JSON tables do not parse the whole document but just the item corresponding to the row they are working on. In addition, indexing can be used by the table as explained previously on this page.

    Generally speaking, the maximum flexibility offered by CONNECT is by using JSON tables and JSON UDFs together. Some things are better handled by tables, other by UDFs. The tools are there but it is up to you to discover the best way to resolve your problems.

    Bjson files

    Starting with Connect 1.7.002, pretty=0 json files can be converted to a binary format that is a pre-parsed representation of json. This can be done with the Jfile_Bjson UDF function, for instance:
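    A sketch (the file names and record length are placeholders):

```sql
SELECT Jfile_Bjson('bigdoc0.json', 'bigdoc0.bjson', 3500);
```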

    Here the third argument, the record length, must be 6 to 10 times larger than the lrecl of the initial json file, because the parsed representation is bigger than the original json text representation.

    Tables using such Bjson files must specify ‘Pretty=-1’ in the option list.

    It is probably similar to the BSON used by MongoDB and PostgreSQL and makes it possible to process queries up to 10 times faster than working on text json files. Indexing is also available for tables using this format, bringing even more performance improvement. For instance, some queries on a json table of half a million rows, which previously took more than 10 seconds, took only 0.1 second once converted and indexed.

    Here again, this has been remade to use the new way Json is handled. The files made using the bfile_bjson function are only from two to four times the size of the source files. This new representation is not compatible with the old one. Therefore, these files must be used with BSON tables only.

    Specifying a JSON table Encoding

    An important feature of JSON is that strings should be in UNICODE. As a matter of fact, all the examples we have found on the Internet seemed to be just ASCII. This is because UNICODE is generally encoded in JSON files using UTF8, UTF16 or UTF32.

    To specify the required encoding, just use the data_charset CONNECT option or the native DEFAULT CHARSET option.

    Retrieving JSON data from MongoDB

    Classified as a NoSQL database program, MongoDB uses JSON-like documents (BSON) grouped in collections. The simplest way, and only method available before Connect 1.6, to access MongoDB data was to export a collection to a JSON file. This produces a file having the pretty=0 format. Viewed as SQL, a collection is a table and documents are table rows.

    Since CONNECT version 1.6, it is now possible to directly access MongoDB collections via their MongoDB C Driver. This is the purpose of the MONGO table type described later. However, JSON tables can also do it in a somewhat different way (providing MONGO support is installed as described for MONGO tables).

    It is achieved by specifying the MongoDB connection URI while creating the table. For instance:

    From Connect 1.7.002

    Before Connect 1.7.002
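    A minimal sketch (the collection name, columns and URI are placeholders):

```sql
CREATE TABLE jinvent (
  _id     CHAR(24) NOT NULL,
  item    CHAR(16) NOT NULL,
  instock VARCHAR(300) NOT NULL JPATH='instock.*')
ENGINE=CONNECT TABLE_TYPE=JSON TABNAME='inventory' LRECL=512
CONNECTION='mongodb://localhost:27017';
```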

    In this statement, the file_name option was replaced by the connection option. It is the URI enabling data to be retrieved from a local or remote MongoDB server. The tabname option is the name of the MongoDB collection that is used, and the dbname option could have been used to indicate the database containing the collection (it defaults to the current database).

    The way it works is that the documents retrieved from MongoDB are serialized and CONNECT uses them as if they were read from a file. This implies serializing by MongoDB and parsing by CONNECT and is not the best performance wise. CONNECT tries its best to reduce the data transfer when a query contains a reduced column list and/or a where clause. This way makes all the possibilities of the JSON table type available, such as calculated arrays.

    However, to work on large JSON collections, using the MONGO table type is generally the normal way.

    Note: JSON tables using the MongoDB access accept the specific MONGO options described in the MONGO table chapter.

    Summary of Options and Variables Used with Json Tables

    Options and variables that can be used when creating Json tables are listed here:

    Table Option
    Type
    Description

    (*) For Json tables connected to MongoDB, Mongo specific options can also be used.

    Other options must be specified in the option list:

    Table Option
    Type
    Description

    Column options:

    Column Option
    Type
    Description

    Variables used with Json tables are:

    Notes

    1. The value n can be 0-based or 1-based depending on the BASE table option. The default is 0 to match current usage in the Json world, but it can be set to 1 for tables created with old versions.

    2. See for instance: , and

    3. This will not work when CONNECT is compiled embedded

    This page is licensed: CC BY-SA / Gnu FDL

    XML en Action

    William J.

    Pardi

    1999

    [+]

    Numeric

    Make the sum of all the non-null array values.

    [x] (Connect >= 1.6), [*] (Connect <= 1.5)

    Numeric

    Make the product of all non-null array values.

    [!]

    Numeric

    Make the average of all the non-null array values.

    [>] or [<]

    All

    Return the greatest or least non-null value of the array.

    [#]

    All

    N.A

    Return the number of values in the array.

    []

    All

    Expand if under an expanded object. Otherwise sum if numeric, else concatenation separated by “, “.

    All

    Between two separators, if an array, expand it if under an expanded object or take the first value of it.

    Joe

    3

    Car

    20.00

    Joe

    4

    Beer

    19.00

    Joe

    4

    Beer

    16.00

    Joe

    4

    Food

    17.00

    Joe

    4

    Food

    17.00

    Joe

    4

    Beer

    14.00

    Joe

    5

    Beer

    14.00

    Joe

    5

    Food

    12.00

    Beth

    3

    Beer

    16.00

    Beth

    4

    Food

    17.00

    Beth

    4

    Beer

    15.00

    Beth

    5

    Food

    12.00

    Beth

    5

    Beer

    20.00

    Janet

    3

    Car

    19.00

    Janet

    3

    Food

    18.00

    Janet

    3

    Beer

    18.00

    Janet

    4

    Car

    17.00

    Janet

    5

    Beer

    14.00

    Janet

    5

    Car

    12.00

    Janet

    5

    Beer

    19.00

    Janet

    5

    Food

    12.00

    5

    Beer, Food

    26.00

    13.00

    Beth

    3

    Beer

    16.00

    16.00

    Beth

    4

    Food, Beer

    32.00

    16.00

    Beth

    5

    Food, Beer

    32.00

    16.00

    Janet

    3

    Car, Food, Beer

    55.00

    18.33

    Janet

    4

    Car

    17.00

    17.00

    Janet

    5

    Beer, Car, Beer, Food

    57.00

    14.25

    Beth

    3, 4, 5

    16.00+32.00+32.00

    80.00

    16.00+16.00+16.00

    48.00

    26.67

    16.00

    Janet

    3, 4, 5

    55.00+17.00+57.00

    129.00

    18.33+17.00+14.25

    49.58

    43.00

    16.12

    • Column 5 shows the week’s expense averages.

    • Column 6 calculates the sum of these averages.

  • Column 7 calculates the average of the week’s sum of expenses.

  • Column 8 calculates the average expense by person.

  • AUTHOR_FIRSTNAME

    CHAR

    15

    $.AUTHOR[0].FIRSTNAME

    AUTHOR_LASTNAME

    CHAR

    8

    $.AUTHOR[0].LASTNAME

    TITLE

    CHAR

    30

    $.TITLE

    TRANSLATED_PREFIX

    CHAR

    23

    $.TRANSLATED.PREFIX

    TRANSLATED_TRANSLATOR_FIRSTNAME

    CHAR

    5

    $TRANSLATED.TRANSLATOR.FIRSTNAME

    TRANSLATED_TRANSLATOR_LASTNAME

    CHAR

    6

    $.TRANSLATED.TRANSLATOR.LASTNAME

    PUBLISHER_NAME

    CHAR

    15

    $.PUBLISHER.NAME

    PUBLISHER_PLACE

    CHAR

    5

    $.PUBLISHER.PLACE

    DATEPUB

    INTEGER

    4

    $.DATEPUB

    AUTHOR_FIRSTNAME

    CHAR

    15

    AUTHOR..FIRSTNAME

    AUTHOR_LASTNAME

    CHAR

    8

    AUTHOR..LASTNAME

    TITLE

    CHAR

    30

    TRANSLATED_PREFIX

    CHAR

    23

    TRANSLATED.PREFIX

    TRANSLATED_TRANSLATOR_FIRSTNAME

    CHAR

    5

    TRANSLATED.TRANSLATOR.FIRSTNAME

    TRANSLATED_TRANSLATOR_LASTNAME

    CHAR

    6

    TRANSLATED.TRANSLATOR.LASTNAME

    PUBLISHER_NAME

    CHAR

    15

    PUBLISHER.NAME

    PUBLISHER_PLACE

    CHAR

    5

    PUBLISHER.PLACE

    DATEPUB

    INTEGER

    4

    AUTHOR_FIRSTNAME

    CHAR

    15

    AUTHOR::FIRSTNAME

    AUTHOR_LASTNAME

    CHAR

    8

    AUTHOR::LASTNAME

    TITLE

    CHAR

    30

    TRANSLATED_PREFIX

    CHAR

    23

    TRANSLATED:PREFIX

    TRANSLATED_TRANSLATOR_FIRSTNAME

    CHAR

    5

    TRANSLATED:TRANSLATOR:FIRSTNAME

    TRANSLATED_TRANSLATOR_LASTNAME

    CHAR

    6

    TRANSLATED:TRANSLATOR:LASTNAME

    PUBLISHER_NAME

    CHAR

    15

    PUBLISHER:NAME

    PUBLISHER_PLACE

    CHAR

    5

    PUBLISHER:PLACE

    DATEPUB

    INTEGER

    4

    7

    sept

    0.7700

    8

    huit

    13.0000

    25

    Breakfast

    1.4140

    XML en Action

    William J.

    Pardi

    1999

    9782840825685

    XML en Action

    Charles

    Dickens

    1999

    Function

    STRING*

    Adds to its first array argument all following arguments.

    jbin_array_delete

    Function

    STRING*

    Deletes the nth element of its first array argument.

    jbin_file

    Function

    STRING*

    Returns the contents of a (json) file.

    jbin_get_item

    Function

    STRING*

    Access and returns a json item by a JPATH key.

    jbin_insert_item

    Function

    STRING

    Insert item values located to paths.

    jbin_item_merge

    Function

    STRING*

    Merges two arrays or two objects.

    jbin_object

    Function

    STRING*

    Make a JSON object containing its arguments.

    jbin_object_nonull

    Function

    STRING*

    Make a JSON object containing its not null arguments.

    jbin_object_add

    Function

    STRING*

    Adds to its first object argument its second argument.

    jbin_object_delete

    Function

    STRING*

    Deletes the nth element of its first object argument.

    jbin_object_key

    Function

    STRING*

    Make a JSON object for key/value pairs.

    jbin_object_list

    Function

    STRING*

    Returns the list of object keys as an array.

    jbin_set_item

    Function

    STRING

    Set item values located to paths.

    jbin_update_item

    Function

    STRING

    Update item values located to paths.

    jfile_bjson

    Function

    STRING

    Convert a pretty=0 file to another BJson file.

    , , ,

    jfile_convert

    Function

    STRING

    Convert a Json file to another pretty=0 file.

    , , ,

    jfile_make

    Function

    STRING

    Make a json file from its json item first argument.

    json_array

    Function

    STRING

    Make a JSON array containing its arguments.

    until Connect 1.5

    json_array_add

    Function

    STRING

    Adds to its first array argument its second arguments (before , all following arguments).

    json_array_add_values

    Function

    STRING

    Adds to its first array argument all following arguments.

    json_array_delete

    Function

    STRING

    Deletes the nth element of its first array argument.

    json_array_grp

    Aggregate

    STRING

    Makes JSON arrays from coming argument.

    json_file

    Function

    STRING

    Returns the contents of a (json) file.

    json_get_item

    Function

    STRING

    Access and returns a json item by a JPATH key.

    json_insert_item

    Function

    STRING

    Insert item values located to paths.

    json_item_merge

    Function

    STRING

    Merges two arrays or two objects.

    json_locate_all

    Function

    STRING

    Returns the JPATH’s of all occurrences of an element.

    json_make_array

    Function

    STRING

    Make a JSON array containing its arguments.

    From Connect 1.6

    json_make_object

    Function

    STRING

    Make a JSON object containing its arguments.

    From Connect 1.6

    json_object

    Function

    STRING

    Make a JSON object containing its arguments.

    until Connect 1.5

    json_object_delete

    Function

    STRING

    Deletes the nth element of its first object argument.

    json_object_grp

    Aggregate

    STRING

    Makes JSON objects from coming arguments.

    json_object_list

    Function

    STRING

    Returns the list of object keys as an array.

    json_object_nonull

    Function

    STRING

    Make a JSON object containing its not null arguments.

    json_serialize

    Function

    STRING

    Serializes the return of a “Jbin” function.

    json_set_item

    Function

    STRING

    Set item values located to paths.

    json_update_item

    Function

    STRING

    Update item values located to paths.

    jsonvalue

    Function

    STRING

    Make a JSON value from its unique argument. Called json_value until and .

    jsoncontains

    Function

    INTEGER

    Returns 0 or 1 if an element is contained in the document.

    jsoncontains_path

    Function

    INTEGER

    Returns 0 or 1 if a JPATH is contained in the document.

    jsonget_string

    Function

    STRING

    Access and returns a string element by a JPATH key.

    jsonget_int

    Function

    INTEGER

    Access and returns an integer element by a JPATH key.

    jsonget_real

    Function

    REAL

    Access and returns a real element by a JPATH key.

    jsonlocate

    Function

    STRING

    Returns the JPATH to access one element.

    Lisbeth

    rabbit

    2

    Kevin

    cat

    2

    Kevin

    bird

    6

    Donald

    dog

    1

    Donald

    fish

    3

    LRECL

    Number

    The file record size for pretty < 2 json files.

    HTTP

    String

    The HTTP address of the server for REST queries.

    URI

    String

    THE URI of REST queries

    CONNECTION*

    String

    Specifies a connection to MONGODB.

    ZIPPED

    Boolean

    True if the json file(s) is/are zipped in one or several zip files.

    MULTIPLE

    Number

    Used to specify a multiple file table.

    SEP_CHAR

    String

    Set it to ‘:’ for old tables using the old json path syntax.

    CATFUNC

    String

    The catalog function (column) used when creating a catalog table.

    OPTION_LIST

    String

    Used to specify all other options listed below.

    BASE

    Number

    The numbering base for arrays: 0 (the default) or 1.

    LIMIT

    Number

    The maximum number of array values to use when concatenating, calculating or expanding arrays. Defaults to 50 (>= Connect 1.7.0003), 10 (<= Connect 1.7.0002).

    FULLARRAY

    Boolean

    Used when creating with Discovery. Make a column for each value of arrays (up to LIMIT).

    JMODE

    Number

    The Json mode (array of objects, array of arrays, or array of values) Only used when inserting new rows.

    ACCEPT

    Boolean

    Keep null columns (for discovery).

    AVGLEN

    Number

    An estimate average length of rows. This is used only when indexing and can be set if indexing fails by miscalculating the table max size.

    STRINGIFY

    String

    Ask discovery to make a column to return the Json representation of this object.

    9782212090819

    Jean-Christophe Bernadac

    Construire une application XML

    Eyrolles Paris

    9782840825685

    William J. Pardi

    XML en Action

    Microsoft Press Pari

    Construire une application XML

    Jean-Christophe Bernadac and François Knab

    Eyrolles

    Paris

    XML en Action

    William J. Pardi

    Microsoft Press

    Paris

    9782212090819

    Construire une application XML

    Jean-Christophe

    Bernadac

    1999

    9782212090819

    Construire une application XML

    François

    Knab

    1999

    n (Connect >= 1.6) or [n][1]

    All

    N.A

    Take the nth value of the array.

    [*] (Connect >= 1.6), [X] or [x] (Connect <= 1.5)

    All

    Expand. Generate one row for each array value.

    ["string"]

    String

    | WHO | WEEK | WHAT | AMOUNT |
    |-----|------|------|--------|
    | Joe | 3 | Beer | 18.00 |
    | Joe | 3 | Food | 12.00 |
    | Joe | 3 | Food | |

    | WHO | WEEK | WHAT | SUM | AVERAGE |
    |-----|------|------|-----|---------|
    | Joe | 3 | Beer, Food, Food, Car | 69.00 | 17.25 |
    | Joe | 4 | Beer, Beer, Food, Food, Beer | 83.00 | 16.60 |

    | WHO | WEEKS | SUMS | SUM | AVGS | SUMAVG | AVGSUM | AVERAGE |
    |-----|-------|------|-----|------|--------|--------|---------|
    | Joe | 3, 4, 5 | 69.00+83.00+26.00 | 178.00 | 17.25+16.60+13.00 | 46.85 | 59.33 | 16.18 |

    | COLUMN_NAME | TYPE | SIZE | JPATH |
    |-------------|------|------|-------|
    | ISBN | CHAR | 13 | $.ISBN |
    | LANG | CHAR | 2 | $.LANG |
    | SUBJECT | CHAR | 12 | |
    | ISBN | CHAR | 13 | |
    | LANG | CHAR | 2 | |
    | SUBJECT | CHAR | 12 | |
    | ISBN | CHAR | 13 | |
    | LANG | CHAR | 2 | |
    | SUBJECT | CHAR | 12 | |

    | a | b | c |
    |---|---|---|
    | 56 | Coucou | 500.0000 |
    | 2 | Hello World | 2.0316 |
    | 1784 | John Doo | 32.4500 |
    | 1914 | Nabucho | 5.1200 |

    [{"FIRSTNAME":"Jean-Christophe","LASTNAME":"Bernadac"},{"FIRSTNAME":"François","LASTNAME":"Knab"}]

    [{"FIRSTNAME":"William J.","LASTNAME":"Pardi"}]

    William J.

    Pardi

    | ISBN | Title | AuthorFN | AuthorLN | YEAR |
    |------|-------|----------|----------|------|
    | 9782212090819 | Construire une application XML | Jean-Christophe | Bernadac | 1999 |
    | 9782212090819 | Construire une application XML | John | Knab | 1999 |

    | Name | Type | Return value | Description |
    |------|------|--------------|-------------|
    | jbin_array | Function | STRING* | Makes a JSON array containing its arguments. |
    | jbin_array_add | Function | STRING* | Adds its second argument to its first array argument. |
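
    The jbin variants return CONNECT's internal binary JSON representation (hence the STRING* return type) rather than text, so they are normally nested inside other JSON UDFs or serialized, as in this sketch based on the examples further down the page:

    SELECT Json_Serialize(Jbin_Array_Add(Jbin_Array('a','b','c'), 'd'));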

    [56,3.141600,"machin",null,"One more"]

    [5,3,4,8,7,9]

    {"a":1,"b":2,"c":[3,5,4]}

    [56,3.141600,"machin",null,"One more","Two more"]

    [56,"foo",null]

    | name | race | number |
    |------|------|--------|
    | John | dog | 2 |
    | Bill | cat | 1 |
    | Mary | dog | 1 |
    | Mary | cat | 1 |

    Bill

    Donald

    John

    Kevin

    Lisbeth

    Mary

    car

    ["a",33]

    29.50

    29

    29.500000000000000

    29.50

    89

    5

    45,28,36,45,89

    243

    48.60

    ["a","b","c","d","e","f"]

    {"a":1,"b":5,"c":3,"d":4,"f":6}

    $.AUTHORS[1].FN

    $[0]

    $[2][1]

    $[2]

    $.AUTHORS[0].LN

    ["$[0][0]","$[1][0][1]"]

    2

    [56,3.141600,"My name is "Foo"",null]

    {"56":56,"3.1416":3.141600,"machin":"machin","NULL":null}

    {"qty":56,"price":3.141600,"truc":"machin","garanty":null}

    {"matricule":40567,"nom":"PANTIER","titre":"DIRECTEUR","salaire":14000.000000}

    {"item":"T-shirt","qty":27,"price":24.990000,"color":"blue"}

    {"item":"T-shirt","price":24.99}

    | name | json_object_grp(NUMBER,race) |
    |------|------------------------------|
    | Bill | {"cat":1} |
    | Donald | {"dog":1,"fish":3} |
    | John | {"dog":2} |
    | Kevin | {"cat":2,"bird":6} |
    | Lisbeth | {"rabbit":2} |
    | Mary | {"dog":1,"cat":1} |

    {"qty":56,"price":3.141600,"truc":"machin","garanty":null}

    ["qty","price","truc","garanty"]

    [1,2,3]

    [1,"foo",3,{"quatre":4,"cinq":5}]

    [1,2,3,{"quatre":4,"cinq":5}]

    [1,"foo",3,{"quatre":4}]

    3.141600

    {"foo":["a","b","c","d"]}

    Binary Json array

    ["a","b","c"]

    [44,55,66]

    {"a":1,"b":[44,55,66]}

    test.json

    {"a":1,"b":[44,55,66]}

    {"a":1,"b":[44,55,66]}

    {"bt1":{"a":1,"b":2,"c":3,"d":4}}

    | Option | Type | Description |
    |--------|------|-------------|
    | ENGINE | String | Must be specified as CONNECT. |
    | TABLE_TYPE | String | Must be JSON or BSON. |
    | FILE_NAME | String | The optional file (path) name of the JSON file. Can be absolute or relative to the current data directory. If not specified, it defaults to the table name and the json file type. |
    | DATA_CHARSET | String | Set it to 'utf8' for most Unicode JSON documents. |
    | DEPTH / LEVEL | Number | Specifies the depth to which CONNECT looks in the document when defining columns by discovery or in catalog tables. |
    | PRETTY | Number | Specifies the format of the JSON file (-1 for BJSON files). |
    | EXPAND | String | The name of the column to expand. |
    | OBJECT | String | The JSON path of the sub-document used for the table. |
    | JPATH / FIELD_FORMAT | String | The column's JSON path. Defaults to the column name. |
    | DATE_FORMAT | String | Specifies the date format used in the JSON file when defining a DATE, DATETIME or TIME column. |
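
    A brief sketch combining some of these table options (the table name is illustrative, and the column list is omitted so that discovery defines it):

    CREATE TABLE jbooks
    ENGINE=CONNECT table_type=JSON file_name='biblio3.json'
    data_charset='utf8' option_list='Pretty=2,Depth=1';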



    [
      {
        "ISBN": "9782212090819",
        "LANG": "fr",
        "SUBJECT": "applications",
        "AUTHOR": [
          {
            "FIRSTNAME": "Jean-Christophe",
            "LASTNAME": "Bernadac"
          },
          {
            "FIRSTNAME": "François",
            "LASTNAME": "Knab"
          }
        ],
        "TITLE": "Construire une application XML",
        "PUBLISHER": {
          "NAME": "Eyrolles",
          "PLACE": "Paris"
        },
        "DATEPUB": 1999
      },
      {
        "ISBN": "9782840825685",
        "LANG": "fr",
        "SUBJECT": "applications",
        "AUTHOR": [
          {
            "FIRSTNAME": "William J.",
            "LASTNAME": "Pardi"
          }
        ],
        "TITLE": "XML en Action",
        "TRANSLATED": {
           "PREFIX": "adapté de l'anglais par",
           "TRANSLATOR": {
              "FIRSTNAME": "James",
            "LASTNAME": "Guerin"
            }
        },
        "PUBLISHER": {
          "NAME": "Microsoft Press",
          "PLACE": "Paris"
        },
        "DATEPUB": 1999
      }
    ]
    CREATE TABLE jsample (
    ISBN CHAR(15),
    LANG CHAR(2),
    SUBJECT CHAR(32),
    AUTHOR CHAR(128),
    TITLE CHAR(32),
    TRANSLATED CHAR(80),
    PUBLISHER CHAR(20),
    DATEPUB INT(4))
    ENGINE=CONNECT table_type=JSON
    File_name='biblio3.json';
    SELECT isbn, author, title, publisher FROM jsample;
    CREATE TABLE jsampall (
    ISBN CHAR(15),
    LANGUAGE CHAR(2) field_format='LANG',
    Subject CHAR(32) field_format='SUBJECT',
    Author CHAR(128) field_format='AUTHOR:[" and "]',
    Title CHAR(32) field_format='TITLE',
    TRANSLATION CHAR(32) field_format='TRANSLATOR:PREFIX',
    Translator CHAR(80) field_format='TRANSLATOR',
    Publisher CHAR(20) field_format='PUBLISHER:NAME',
    LOCATION CHAR(16) field_format='PUBLISHER:PLACE',
    YEAR INT(4) field_format='DATEPUB')
    ENGINE=CONNECT table_type=JSON File_name='biblio3.json';
    CREATE TABLE jsampall (
    ISBN CHAR(15),
    LANGUAGE CHAR(2) field_format='LANG',
    Subject CHAR(32) field_format='SUBJECT',
    Author CHAR(128) field_format='AUTHOR.[" and "]',
    Title CHAR(32) field_format='TITLE',
    TRANSLATION CHAR(32) field_format='TRANSLATOR.PREFIX',
    Translator CHAR(80) field_format='TRANSLATOR',
    Publisher CHAR(20) field_format='PUBLISHER.NAME',
    LOCATION CHAR(16) field_format='PUBLISHER.PLACE',
    YEAR INT(4) field_format='DATEPUB')
    ENGINE=CONNECT table_type=JSON File_name='biblio3.json';
    CREATE TABLE jsampall (
    ISBN CHAR(15),
    LANGUAGE CHAR(2) jpath='$.LANG',
    Subject CHAR(32) jpath='$.SUBJECT',
    Author CHAR(128) jpath='$.AUTHOR[" and "]',
    Title CHAR(32) jpath='$.TITLE',
    TRANSLATION CHAR(32) jpath='$.TRANSLATOR.PREFIX',
    Translator CHAR(80) jpath='$.TRANSLATOR',
    Publisher CHAR(20) jpath='$.PUBLISHER.NAME',
    LOCATION CHAR(16) jpath='$.PUBLISHER.PLACE',
    YEAR INT(4) jpath='$.DATEPUB')
    ENGINE=CONNECT table_type=JSON File_name='biblio3.json';
    SELECT title, author, publisher, LOCATION FROM jsampall;
    CREATE TABLE jsampex (
    ISBN CHAR(15),
    Title CHAR(32) field_format='TITLE',
    AuthorFN CHAR(128) field_format='AUTHOR:[X]:FIRSTNAME',
    AuthorLN CHAR(128) field_format='AUTHOR:[X]:LASTNAME',
    YEAR INT(4) field_format='DATEPUB')
    ENGINE=CONNECT table_type=JSON File_name='biblio3.json';
    CREATE TABLE jsampex (
    ISBN CHAR(15),
    Title CHAR(32) field_format='TITLE',
    AuthorFN CHAR(128) field_format='AUTHOR.[X].FIRSTNAME',
    AuthorLN CHAR(128) field_format='AUTHOR.[X].LASTNAME',
    YEAR INT(4) field_format='DATEPUB')
    ENGINE=CONNECT table_type=JSON File_name='biblio3.json';
    CREATE TABLE jsampex (
    ISBN CHAR(15),
    Title CHAR(32) field_format='TITLE',
    AuthorFN CHAR(128) field_format='AUTHOR[*].FIRSTNAME',
    AuthorLN CHAR(128) field_format='AUTHOR[*].LASTNAME',
    YEAR INT(4) field_format='DATEPUB')
    ENGINE=CONNECT table_type=JSON File_name='biblio3.json';
    CREATE TABLE jsampex (
    ISBN CHAR(15),
    Title CHAR(32) jpath='TITLE',
    AuthorFN CHAR(128) jpath='AUTHOR[*].FIRSTNAME',
    AuthorLN CHAR(128) jpath='AUTHOR[*].LASTNAME',
    YEAR INT(4) jpath='DATEPUB')
    ENGINE=CONNECT table_type=JSON File_name='biblio3.json';
    AUTHOR:[1]:LASTNAME
    OPTION_LIST='Expand=AUTHOR'
    CREATE TABLE jexpall (
    WHO CHAR(12),
    WEEK INT(2) jpath='$.WEEK[*].NUMBER',
    WHAT CHAR(32) jpath='$.WEEK[*].EXPENSE[*].WHAT',
    AMOUNT DOUBLE(8,2) jpath='$.WEEK[*].EXPENSE[*].AMOUNT')
    ENGINE=CONNECT table_type=JSON File_name='expense.json';
    CREATE TABLE jexpall (
    WHO CHAR(12),
    WEEK INT(2) field_format='$.WEEK[*].NUMBER',
    WHAT CHAR(32) field_format='$.WEEK[*].EXPENSE[*].WHAT',
    AMOUNT DOUBLE(8,2) field_format='$.WEEK[*].EXPENSE[*].AMOUNT')
    ENGINE=CONNECT table_type=JSON File_name='expense.json';
    CREATE TABLE jexpall (
    WHO CHAR(12),
    WEEK INT(2) field_format='WEEK:[x]:NUMBER',
    WHAT CHAR(32) field_format='WEEK:[x]:EXPENSE:[x]:WHAT',
    AMOUNT DOUBLE(8,2) field_format='WEEK:[x]:EXPENSE:[x]:AMOUNT')
    ENGINE=CONNECT table_type=JSON File_name='expense.json';
    CREATE TABLE jexpw (
    WHO CHAR(12) NOT NULL,
    WEEK INT(2) NOT NULL jpath='$.WEEK[*].NUMBER',
    WHAT CHAR(32) NOT NULL jpath='$.WEEK[].EXPENSE[", "].WHAT',
    SUM DOUBLE(8,2) NOT NULL jpath='$.WEEK[].EXPENSE[+].AMOUNT',
    AVERAGE DOUBLE(8,2) NOT NULL jpath='$.WEEK[].EXPENSE[!].AMOUNT')
    ENGINE=CONNECT table_type=JSON File_name='expense.json';
    CREATE TABLE jexpw (
    WHO CHAR(12) NOT NULL,
    WEEK INT(2) NOT NULL field_format='$.WEEK[*].NUMBER',
    WHAT CHAR(32) NOT NULL field_format='$.WEEK[].EXPENSE[", "].WHAT',
    SUM DOUBLE(8,2) NOT NULL field_format='$.WEEK[].EXPENSE[+].AMOUNT',
    AVERAGE DOUBLE(8,2) NOT NULL field_format='$.WEEK[].EXPENSE[!].AMOUNT')
    ENGINE=CONNECT table_type=JSON File_name='expense.json';
    CREATE TABLE jexpw (
    WHO CHAR(12) NOT NULL,
    WEEK INT(2) NOT NULL field_format='WEEK:[x]:NUMBER',
    WHAT CHAR(32) NOT NULL field_format='WEEK::EXPENSE:[", "]:WHAT',
    SUM DOUBLE(8,2) NOT NULL field_format='WEEK::EXPENSE:[+]:AMOUNT',
    AVERAGE DOUBLE(8,2) NOT NULL field_format='WEEK::EXPENSE:[!]:AMOUNT')
    ENGINE=CONNECT table_type=JSON File_name='expense.json';
    CREATE TABLE jexpz (
    WHO CHAR(12) NOT NULL,
    WEEKS CHAR(12) NOT NULL field_format='WEEK[", "].NUMBER',
    SUMS CHAR(64) NOT NULL field_format='WEEK["+"].EXPENSE[+].AMOUNT',
    SUM DOUBLE(8,2) NOT NULL field_format='WEEK[+].EXPENSE[+].AMOUNT',
    AVGS CHAR(64) NOT NULL field_format='WEEK["+"].EXPENSE[!].AMOUNT',
    SUMAVG DOUBLE(8,2) NOT NULL field_format='WEEK[+].EXPENSE[!].AMOUNT',
    AVGSUM DOUBLE(8,2) NOT NULL field_format='WEEK[!].EXPENSE[+].AMOUNT',
    AVERAGE DOUBLE(8,2) NOT NULL field_format='WEEK[!].EXPENSE[*].AMOUNT')
    ENGINE=CONNECT table_type=JSON File_name='expense.json';
    CREATE TABLE jexpz (
    WHO CHAR(12) NOT NULL,
    WEEKS CHAR(12) NOT NULL jpath='WEEK[", "].NUMBER',
    SUMS CHAR(64) NOT NULL jpath='WEEK["+"].EXPENSE[+].AMOUNT',
    SUM DOUBLE(8,2) NOT NULL jpath='WEEK[+].EXPENSE[+].AMOUNT',
    AVGS CHAR(64) NOT NULL jpath='WEEK["+"].EXPENSE[!].AMOUNT',
    SUMAVG DOUBLE(8,2) NOT NULL jpath='WEEK[+].EXPENSE[!].AMOUNT',
    AVGSUM DOUBLE(8,2) NOT NULL jpath='WEEK[!].EXPENSE[+].AMOUNT',
    AVERAGE DOUBLE(8,2) NOT NULL jpath='WEEK[!].EXPENSE[*].AMOUNT')
    ENGINE=CONNECT table_type=JSON File_name='expense.json';
    CREATE TABLE jexpz (
    WHO CHAR(12) NOT NULL,
    WEEKS CHAR(12) NOT NULL field_format='WEEK:[", "]:NUMBER',
    SUMS CHAR(64) NOT NULL field_format='WEEK:["+"]:EXPENSE:[+]:AMOUNT',
    SUM DOUBLE(8,2) NOT NULL field_format='WEEK:[+]:EXPENSE:[+]:AMOUNT',
    AVGS CHAR(64) NOT NULL field_format='WEEK:["+"]:EXPENSE:[!]:AMOUNT',
    SUMAVG DOUBLE(8,2) NOT NULL field_format='WEEK:[+]:EXPENSE:[!]:AMOUNT',
    AVGSUM DOUBLE(8,2) NOT NULL field_format='WEEK:[!]:EXPENSE:[+]:AMOUNT',
    AVERAGE DOUBLE(8,2) NOT NULL field_format='WEEK:[!]:EXPENSE:[x]:AMOUNT')
    ENGINE=CONNECT table_type=JSON
    File_name='E:/Data/Json/expense2.json';
    SET connect_json_null='NULL';
    SET connect_json_null=NULL;
    CREATE TABLE jsample ENGINE=CONNECT table_type=JSON file_name='biblio3.json';
    CREATE TABLE `jsample` (
      `ISBN` CHAR(13) NOT NULL,
      `LANG` CHAR(2) NOT NULL,
      `SUBJECT` CHAR(12) NOT NULL,
      `AUTHOR` VARCHAR(256) DEFAULT NULL,
      `TITLE` CHAR(30) NOT NULL,
      `TRANSLATED` VARCHAR(256) DEFAULT NULL,
      `PUBLISHER` VARCHAR(256) DEFAULT NULL,
      `DATEPUB` INT(4) NOT NULL
    ) ENGINE=CONNECT DEFAULT CHARSET=latin1 `TABLE_TYPE`='JSON' `FILE_NAME`='biblio3.json';
    CREATE TABLE jsampall2 ENGINE=CONNECT table_type=JSON 
      file_name='biblio3.json' option_list='level=1';
    CREATE TABLE `jsampall2` (
      `ISBN` CHAR(13) NOT NULL,
      `LANG` CHAR(2) NOT NULL,
      `SUBJECT` CHAR(12) NOT NULL,
      `AUTHOR_FIRSTNAME` CHAR(15) NOT NULL `JPATH`='$.AUTHOR.[0].FIRSTNAME',
      `AUTHOR_LASTNAME` CHAR(8) NOT NULL `JPATH`='$.AUTHOR.[0].LASTNAME',
      `TITLE` CHAR(30) NOT NULL,
      `TRANSLATED_PREFIX` CHAR(23) DEFAULT NULL `JPATH`='$.TRANSLATED.PREFIX',
      `TRANSLATED_TRANSLATOR` VARCHAR(256) DEFAULT NULL `JPATH`='$.TRANSLATED.TRANSLATOR',
      `PUBLISHER_NAME` CHAR(15) NOT NULL `JPATH`='$.PUBLISHER.NAME',
      `PUBLISHER_PLACE` CHAR(5) NOT NULL `JPATH`='$.PUBLISHER.PLACE',
      `DATEPUB` INT(4) NOT NULL
    ) ENGINE=CONNECT DEFAULT CHARSET=latin1 `TABLE_TYPE`='JSON' 
      `FILE_NAME`='biblio3.json' `OPTION_LIST`='depth=1';
    CREATE TABLE `jsampall2` (
      `ISBN` CHAR(13) NOT NULL,
      `LANG` CHAR(2) NOT NULL,
      `SUBJECT` CHAR(12) NOT NULL,
      `AUTHOR_FIRSTNAME` CHAR(15) NOT NULL `FIELD_FORMAT`='AUTHOR..FIRSTNAME',
      `AUTHOR_LASTNAME` CHAR(8) NOT NULL `FIELD_FORMAT`='AUTHOR..LASTNAME',
      `TITLE` CHAR(30) NOT NULL,
      `TRANSLATED_PREFIX` CHAR(23) DEFAULT NULL `FIELD_FORMAT`='TRANSLATED.PREFIX',
      `TRANSLATED_TRANSLATOR` VARCHAR(256) DEFAULT NULL `FIELD_FORMAT`='TRANSLATED.TRANSLATOR',
      `PUBLISHER_NAME` CHAR(15) NOT NULL `FIELD_FORMAT`='PUBLISHER.NAME',
      `PUBLISHER_PLACE` CHAR(5) NOT NULL `FIELD_FORMAT`='PUBLISHER.PLACE',
      `DATEPUB` INT(4) NOT NULL
    ) ENGINE=CONNECT DEFAULT CHARSET=latin1 `TABLE_TYPE`='JSON' 
      `FILE_NAME`='biblio3.json' `OPTION_LIST`='level=1';
    CREATE TABLE `jsampall2` (
      `ISBN` CHAR(13) NOT NULL,
      `LANG` CHAR(2) NOT NULL,
      `SUBJECT` CHAR(12) NOT NULL,
      `AUTHOR_FIRSTNAME` CHAR(15) NOT NULL `FIELD_FORMAT`='AUTHOR::FIRSTNAME',
      `AUTHOR_LASTNAME` CHAR(8) NOT NULL `FIELD_FORMAT`='AUTHOR::LASTNAME',
      `TITLE` CHAR(30) NOT NULL,
      `TRANSLATED_PREFIX` CHAR(23) DEFAULT NULL `FIELD_FORMAT`='TRANSLATED:PREFIX',
      `TRANSLATED_TRANSLATOR` VARCHAR(256) DEFAULT NULL `FIELD_FORMAT`='TRANSLATED:TRANSLATOR',
      `PUBLISHER_NAME` CHAR(15) NOT NULL `FIELD_FORMAT`='PUBLISHER:NAME',
      `PUBLISHER_PLACE` CHAR(5) NOT NULL `FIELD_FORMAT`='PUBLISHER:PLACE',
      `DATEPUB` INT(4) NOT NULL
    ) ENGINE=CONNECT DEFAULT CHARSET=latin1 `TABLE_TYPE`='JSON'
      `FILE_NAME`='biblio3.json' `OPTION_LIST`='level=1';
    {"_id":1,"name":"Corner Social","cuisine":"American","grades":[{"grade":"A","score":6}]}
    {"_id":2,"name":"La Nueva Clasica Antillana","cuisine":"Spanish","grades":[]}
    CREATE TABLE sjr0
    ENGINE=CONNECT table_type=JSON file_name='sresto.json'
    option_list='Pretty=0,Depth=1' lrecl=128;
    CREATE TABLE `sjr0` (
      `_id` BIGINT(1) NOT NULL,
      `name` CHAR(26) NOT NULL,
      `cuisine` CHAR(8) NOT NULL,
      `grades` CHAR(1) DEFAULT NULL,
      `grades_grade` CHAR(1) DEFAULT NULL `JPATH`='$.grades[0].grade',
      `grades_score` BIGINT(1) DEFAULT NULL `JPATH`='$.grades[0].score'
    ) ENGINE=CONNECT DEFAULT CHARSET=latin1 `TABLE_TYPE`='JSON'
      `FILE_NAME`='sresto.json' 
      `OPTION_LIST`='Pretty=0,Depth=1,Accept=1' `LRECL`=128;
    CREATE TABLE bibcol ENGINE=CONNECT table_type=JSON file_name='biblio3.json' 
      option_list='level=2' catfunc=columns;
    SELECT COLUMN_NAME, type_name TYPE, column_size SIZE, jpath FROM bibcol;
    {
       "data": [
          {
             "id": "X999_Y999",
             "from": {
                "name": "Tom Brady", "id": "X12"
             },
             "message": "Looking forward to 2010!",
             "actions": [
                {
                   "name": "Comment",
                   "link": "http://www.facebook.com/X999/posts/Y999"
                },
                {
                   "name": "Like",
                   "link": "http://www.facebook.com/X999/posts/Y999"
                }
             ],
             "type": "status",
             "created_time": "2010-08-02T21:27:44+0000",
             "updated_time": "2010-08-02T21:27:44+0000"
          },
          {
             "id": "X998_Y998",
             "from": {
                "name": "Peyton Manning", "id": "X18"
             },
             "message": "Where's my contract?",
             "actions": [
                {
                   "name": "Comment",
                   "link": "http://www.facebook.com/X998/posts/Y998"
                },
                {
                   "name": "Like",
                   "link": "http://www.facebook.com/X998/posts/Y998"
                }
             ],
             "type": "status",
             "created_time": "2010-08-02T21:27:44+0000",
             "updated_time": "2010-08-02T21:27:44+0000"
          }
       ]
    }
    CREATE TABLE jfacebook (
    `ID` CHAR(10) jpath='id',
    `Name` CHAR(32) jpath='from.name',
    `MyID` CHAR(16) jpath='from.id',
    `Message` VARCHAR(256) jpath='message',
    `Action` CHAR(16) jpath='actions..name',
    `Link` VARCHAR(256) jpath='actions..link',
    `Type` CHAR(16) jpath='type',
    `Created` DATETIME date_format='YYYY-MM-DD\'T\'hh:mm:ss' jpath='created_time',
    `Updated` DATETIME date_format='YYYY-MM-DD\'T\'hh:mm:ss' jpath='updated_time')
    ENGINE=CONNECT table_type=JSON file_name='facebook.json' option_list='Object=data,Expand=actions';
    CREATE TABLE jfacebook (
    `ID` CHAR(10) field_format='id',
    `Name` CHAR(32) field_format='from.name',
    `MyID` CHAR(16) field_format='from.id',
    `Message` VARCHAR(256) field_format='message',
    `Action` CHAR(16) field_format='actions..name',
    `Link` VARCHAR(256) field_format='actions..link',
    `Type` CHAR(16) field_format='type',
    `Created` DATETIME date_format='YYYY-MM-DD\'T\'hh:mm:ss' field_format='created_time',
    `Updated` DATETIME date_format='YYYY-MM-DD\'T\'hh:mm:ss' field_format='updated_time')
    ENGINE=CONNECT table_type=JSON file_name='facebook.json' option_list='Object=data,Expand=actions';
    CREATE TABLE jfacebook (
    `ID` CHAR(10) field_format='id',
    `Name` CHAR(32) field_format='from:name',
    `MyID` CHAR(16) field_format='from:id',
    `Message` VARCHAR(256) field_format='message',
    `Action` CHAR(16) field_format='actions::name',
    `Link` VARCHAR(256) field_format='actions::link',
    `Type` CHAR(16) field_format='type',
    `Created` DATETIME date_format='YYYY-MM-DD\'T\'hh:mm:ss' field_format='created_time',
    `Updated` DATETIME date_format='YYYY-MM-DD\'T\'hh:mm:ss' field_format='updated_time')
    ENGINE=CONNECT table_type=JSON file_name='facebook.json' option_list='Object=data,Expand=actions';
    { "_id" : "01001", "city" : "AGAWAM", "loc" : [ -72.622739, 42.070206 ], "pop" : 15338, "state" : "MA" }
    { "_id" : "01002", "city" : "CUSHMAN", "loc" : [ -72.51564999999999, 42.377017 ], "pop" : 36963, "state" : "MA" }
    { "_id" : "01005", "city" : "BARRE", "loc" : [ -72.1083540000001, 42.409698 ], "pop" : 4546, "state" : "MA" }
    { "_id" : "01007", "city" : "BELCHERTOWN", "loc" : [ -72.4109530000001, 42.275103 ], "pop" : 10579, "state" : "MA" }
    …
    { "_id" : "99929", "city" : "WRANGELL", "loc" : [ -132.352918, 56.433524 ], "pop" : 2573, "state" : "AK" }
    { "_id" : "99950", "city" : "KETCHIKAN", "loc" : [ -133.18479, 55.942471 ], "pop" : 422, "state" : "AK" }
    CREATE TABLE cities (
    `_id` CHAR(5) KEY,
    `city` CHAR(32),
    `lat` DOUBLE(12,6) jpath='loc.0',
    `long` DOUBLE(12,6) jpath='loc.1',
    `pop` INT(8),
    `state` CHAR(2) distrib='clustered')
    ENGINE=CONNECT table_type=JSON file_name='cities.json' lrecl=128 option_list='pretty=0';
    CREATE TABLE cities (
    `_id` CHAR(5) KEY,
    `city` CHAR(32),
    `lat` DOUBLE(12,6) field_format='loc.0',
    `long` DOUBLE(12,6) field_format='loc.1',
    `pop` INT(8),
    `state` CHAR(2) distrib='clustered')
    ENGINE=CONNECT table_type=JSON file_name='cities.json' lrecl=128 option_list='pretty=0';
    CREATE TABLE cities (
    `_id` CHAR(5) KEY,
    `city` CHAR(32),
    `long` DOUBLE(12,6) field_format='loc:[0]',
    `lat` DOUBLE(12,6) field_format='loc:[1]',
    `pop` INT(8),
    `state` CHAR(2) distrib='clustered')
    ENGINE=CONNECT table_type=JSON file_name='cities.json' lrecl=128 option_list='pretty=0';
    [
      [56, "Coucou", 500.00],
      [[2,0,1,4], "Hello World", 2.0316],
      ["1784", "John Doo", 32.4500],
      [1914, ["Nabucho","donosor"], 5.12],
      [7, "sept", [0.77,1.22,2.01]],
      [8, "huit", 13.0]
    ]
    CREATE TABLE xjson (
    `a` INT(6) jpath='1',
    `b` CHAR(32) jpath='2',
    `c` DOUBLE(10,4) jpath='3')
    ENGINE=CONNECT table_type=JSON file_name='test.json' option_list='Pretty=1,Jmode=1,Base=1' lrecl=128;
    CREATE TABLE xjson (
    `a` INT(6) field_format='1',
    `b` CHAR(32) field_format='2',
    `c` DOUBLE(10,4) field_format='3')
    ENGINE=CONNECT table_type=JSON file_name='test.json' option_list='Pretty=1,Jmode=1,Base=1' lrecl=128;
    CREATE TABLE xjson (
    `a` INT(6) field_format='[1]',
    `b` CHAR(32) field_format='[2]',
    `c` DOUBLE(10,4) field_format='[3]')
    ENGINE=CONNECT table_type=JSON file_name='test.json'
    option_list='Pretty=1,Jmode=1,Base=1' lrecl=128;
    INSERT INTO xjson VALUES(25, 'Breakfast', 1.414);
    CREATE TABLE jsample2 (
    ISBN CHAR(15),
    Lng CHAR(2) jpath='LANG',
    json_Author CHAR(255) jpath='AUTHOR.*',
    Title CHAR(32) jpath='TITLE',
    YEAR INT(4) jpath='DATEPUB')
    ENGINE=CONNECT table_type=JSON file_name='biblio3.json';
    CREATE TABLE jsample2 (
    ISBN CHAR(15),
    Lng CHAR(2) field_format='LANG',
    json_Author CHAR(255) field_format='AUTHOR.*',
    Title CHAR(32) field_format='TITLE',
    YEAR INT(4) field_format='DATEPUB')
    ENGINE=CONNECT table_type=JSON file_name='biblio3.json';
    CREATE TABLE jsample2 (
    ISBN CHAR(15),
    Lng CHAR(2) field_format='LANG',
    json_Author CHAR(255) field_format='AUTHOR:*',
    Title CHAR(32) field_format='TITLE',
    YEAR INT(4) field_format='DATEPUB')
    ENGINE=CONNECT table_type=JSON file_name='biblio3.json';
    SELECT json_Author FROM jsample2;
    UPDATE jsampex SET authorfn = 'John' WHERE authorln = 'Knab';
    UPDATE jsampex SET authorfn = 'John' WHERE isbn = '9782212090819';
    UPDATE jsampex ADD authorfn = 'Charles', authorln = 'Dickens'
    WHERE title = 'XML en Action';
    CREATE TABLE jauthor (
    FIRSTNAME CHAR(64),
    LASTNAME CHAR(64))
    ENGINE=CONNECT table_type=JSON File_name='biblio3.json' option_list='Object=1.AUTHOR';
    CREATE TABLE jauthor (
    FIRSTNAME CHAR(64),
    LASTNAME CHAR(64))
    ENGINE=CONNECT table_type=JSON File_name='biblio3.json' option_list='Object=[1]:AUTHOR';
    SELECT * FROM jauthor;
    INSERT INTO jauthor VALUES('Charles','Dickens');
    SELECT * FROM jsampex;
    UPDATE jsample2 SET json_Author =
    '[{"FIRSTNAME":"William J.","LASTNAME":"Pardi"},
      {"FIRSTNAME":"Charles","LASTNAME":"Dickens"}]'
    WHERE isbn = '9782840825685';
    CREATE FUNCTION jsonvalue RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_make_array RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_array_add_values RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_array_add RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_array_delete RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_make_object RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_object_nonull RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_object_key RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_object_add RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_object_delete RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_object_list RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_object_values RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jsonset_grp_size RETURNS INTEGER soname 'ha_connect.so';
    CREATE FUNCTION jsonget_grp_size RETURNS INTEGER soname 'ha_connect.so';
    CREATE AGGREGATE FUNCTION json_array_grp RETURNS STRING soname 'ha_connect.so';
    CREATE AGGREGATE FUNCTION json_object_grp RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jsonlocate RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_locate_all RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jsoncontains RETURNS INTEGER soname 'ha_connect.so';
    CREATE FUNCTION jsoncontains_path RETURNS INTEGER soname 'ha_connect.so';
    CREATE FUNCTION json_item_merge RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_get_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jsonget_string RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jsonget_int RETURNS INTEGER soname 'ha_connect.so';
    CREATE FUNCTION jsonget_real RETURNS REAL soname 'ha_connect.so';
    CREATE FUNCTION json_set_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_insert_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_update_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_file RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jfile_make RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jfile_convert RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jfile_bjson RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_serialize RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_array RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_array_add_values RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_array_add RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_array_delete RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_object RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_object_nonull RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_object_key RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_object_add RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_object_delete RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_object_list RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_item_merge RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_get_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_set_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_insert_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_update_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_file RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jsonvalue RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_make_array RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_array_add_values RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_array_add RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_array_delete RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_make_object RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_object_nonull RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_object_key RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_object_add RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_object_delete RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_object_list RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jsonset_grp_size RETURNS INTEGER soname 'ha_connect.so';
    CREATE FUNCTION jsonget_grp_size RETURNS INTEGER soname 'ha_connect.so';
    CREATE AGGREGATE FUNCTION json_array_grp RETURNS STRING soname 'ha_connect.so';
    CREATE AGGREGATE FUNCTION json_object_grp RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jsonlocate RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_locate_all RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jsoncontains RETURNS INTEGER soname 'ha_connect.so';
    CREATE FUNCTION jsoncontains_path RETURNS INTEGER soname 'ha_connect.so';
    CREATE FUNCTION json_item_merge RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_get_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jsonget_string RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jsonget_int RETURNS INTEGER soname 'ha_connect.so';
    CREATE FUNCTION jsonget_real RETURNS REAL soname 'ha_connect.so';
    CREATE FUNCTION json_set_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_insert_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_update_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_file RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jfile_make RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_serialize RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_array RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_array_add_values RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_array_add RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_array_delete RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_object RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_object_nonull RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_object_key RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_object_add RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_object_delete RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_object_list RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_item_merge RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_get_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_set_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_insert_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_update_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_file RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jsonvalue RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_array RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_array_add_values RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_array_add RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_array_delete RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_object RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_object_nonull RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_object_key RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_object_add RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_object_delete RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_object_list RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jsonset_grp_size RETURNS INTEGER soname 'ha_connect.so';
    CREATE FUNCTION jsonget_grp_size RETURNS INTEGER soname 'ha_connect.so';
    CREATE AGGREGATE FUNCTION json_array_grp RETURNS STRING soname 'ha_connect.so';
    CREATE AGGREGATE FUNCTION json_object_grp RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jsonlocate RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_locate_all RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jsoncontains RETURNS INTEGER soname 'ha_connect.so';
    CREATE FUNCTION jsoncontains_path RETURNS INTEGER soname 'ha_connect.so';
    CREATE FUNCTION json_item_merge RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_get_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jsonget_string RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jsonget_int RETURNS INTEGER soname 'ha_connect.so';
    CREATE FUNCTION jsonget_real RETURNS REAL soname 'ha_connect.so';
    CREATE FUNCTION json_set_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_insert_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_update_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_file RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jfile_make RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION json_serialize RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_array RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_array_add_values RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_array_add RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_array_delete RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_object RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_object_nonull RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_object_key RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_object_add RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_object_delete RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_object_list RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_item_merge RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_get_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_set_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_insert_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_update_item RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jbin_file RETURNS STRING soname 'ha_connect.so';
    CREATE FUNCTION jsonvalue RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_make_array RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_array_add_values RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_array_add RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_array_delete RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_make_object RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_object_nonull RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_object_key RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_object_add RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_object_delete RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_object_list RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_object_values RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jsonset_grp_size RETURNS INTEGER soname 'ha_connect';
    CREATE FUNCTION jsonget_grp_size RETURNS INTEGER soname 'ha_connect';
    CREATE AGGREGATE FUNCTION json_array_grp RETURNS STRING soname 'ha_connect';
    CREATE AGGREGATE FUNCTION json_object_grp RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jsonlocate RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_locate_all RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jsoncontains RETURNS INTEGER soname 'ha_connect';
    CREATE FUNCTION jsoncontains_path RETURNS INTEGER soname 'ha_connect';
    CREATE FUNCTION json_item_merge RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_get_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jsonget_string RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jsonget_int RETURNS INTEGER soname 'ha_connect';
    CREATE FUNCTION jsonget_real RETURNS REAL soname 'ha_connect';
    CREATE FUNCTION json_set_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_insert_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_update_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_file RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jfile_make RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jfile_convert RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jfile_bjson RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_serialize RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_array RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_array_add_values RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_array_add RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_array_delete RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_object RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_object_nonull RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_object_key RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_object_add RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_object_delete RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_object_list RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_item_merge RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_get_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_set_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_insert_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_update_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_file RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jsonvalue RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_make_array RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_array_add_values RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_array_add RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_array_delete RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_make_object RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_object_nonull RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_object_key RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_object_add RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_object_delete RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_object_list RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jsonset_grp_size RETURNS INTEGER soname 'ha_connect';
    CREATE FUNCTION jsonget_grp_size RETURNS INTEGER soname 'ha_connect';
    CREATE AGGREGATE FUNCTION json_array_grp RETURNS STRING soname 'ha_connect';
    CREATE AGGREGATE FUNCTION json_object_grp RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jsonlocate RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_locate_all RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jsoncontains RETURNS INTEGER soname 'ha_connect';
    CREATE FUNCTION jsoncontains_path RETURNS INTEGER soname 'ha_connect';
    CREATE FUNCTION json_item_merge RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_get_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jsonget_string RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jsonget_int RETURNS INTEGER soname 'ha_connect';
    CREATE FUNCTION jsonget_real RETURNS REAL soname 'ha_connect';
    CREATE FUNCTION json_set_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_insert_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_update_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_file RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jfile_make RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_serialize RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_array RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_array_add_values RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_array_add RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_array_delete RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_object RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_object_nonull RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_object_key RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_object_add RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_object_delete RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_object_list RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_item_merge RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_get_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_set_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_insert_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_update_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_file RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jsonvalue RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_array RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_array_add_values RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_array_add RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_array_delete RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_object RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_object_nonull RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_object_key RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_object_add RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_object_delete RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_object_list RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jsonset_grp_size RETURNS INTEGER soname 'ha_connect';
    CREATE FUNCTION jsonget_grp_size RETURNS INTEGER soname 'ha_connect';
    CREATE AGGREGATE FUNCTION json_array_grp RETURNS STRING soname 'ha_connect';
    CREATE AGGREGATE FUNCTION json_object_grp RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jsonlocate RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_locate_all RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jsoncontains RETURNS INTEGER soname 'ha_connect';
    CREATE FUNCTION jsoncontains_path RETURNS INTEGER soname 'ha_connect';
    CREATE FUNCTION json_item_merge RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_get_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jsonget_string RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jsonget_int RETURNS INTEGER soname 'ha_connect';
    CREATE FUNCTION jsonget_real RETURNS REAL soname 'ha_connect';
    CREATE FUNCTION json_set_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_insert_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_update_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_file RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jfile_make RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION json_serialize RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_array RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_array_add_values RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_array_add RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_array_delete RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_object RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_object_nonull RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_object_key RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_object_add RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_object_delete RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_object_list RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_item_merge RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_get_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_set_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_insert_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_update_item RETURNS STRING soname 'ha_connect';
    CREATE FUNCTION jbin_file RETURNS STRING soname 'ha_connect';
    Jfile_Bjson(in_file_name, out_file_name, lrecl)
    Jfile_Convert(in_file_name, out_file_name, lrecl)
    Jfile_Make(arg1, arg2, [arg3], …)
    SELECT Jfile_Make('tb.json' jfile_, 2);
    [
      {
        "_id": 5,
        "type": "food",
        "ratings": [
          5,
          8,
          9
        ]
      },
      {
        "_id": 6,
        "type": "car",
        "ratings": [
          5,
          9
        ]
      }
    ]
    Json_Array_Add(arg1, arg2, [arg3][, arg4][, ...])
    SELECT Json_Array_Add(Json_Array(56,3.1416,'machin',NULL),
    'One more') ARRAY;
    UPDATE jsample2 SET 
      json_author = json_array_add(json_author, json_object('Charles' FIRSTNAME, 'Dickens' LASTNAME)) 
      WHERE isbn = '9782840825685';
    SELECT Json_Array_Add('[5,3,8,7,9]' json_, 4, 2) ARRAY;
    SELECT Json_Array_Add('{"a":1,"b":2,"c":[3,4]}' json_, 5, 1, 'c');
    Json_Array_Add_Values(arg, arglist)
    SELECT Json_Array_Add_Values
      (Json_Array(56, 3.1416, 'machin', NULL), 'One more', 'Two more') ARRAY;
    Json_Array_Delete(arg1, arg2 [,arg3] [...])
    SELECT Json_Array_Delete(Json_Array(56,3.1416,'foo',NULL),1) ARRAY;
    UPDATE jsample2 SET json_author = json_array_delete(json_author, 1) 
      WHERE isbn = '9782840825685';
    Json_Array_Grp(arg)
    SELECT name, json_array_grp(race) FROM pet GROUP BY name;
    JsonContains(json_doc, item [, int])
    JsonContains_Path(json_doc, path)
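    A sketch of how these two predicates can be called, with illustrative literal values (both calls are expected to return 1 here):

    SELECT JsonContains('[45,28,[36,45],89]', 28) "Contains";
    SELECT JsonContains_Path('{"qty":7,"price":29.50,"garanty":null}', '$.price') "HasPath";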
    Json_File(arg1, [arg2, [arg3]], …)
    { "_id" : 5, "type" : "food", "ratings" : [ 5, 8, 9 ] }
    { "_id" : 6, "type" : "car", "ratings" : [ 5, 9 ] }
    SELECT JsonGet_String(Json_File('tb.json', 0), '$[1].type') "Type";
    Json_Get_Item(arg1, arg2, …)
    SELECT Json_Get_Item(Json_Object('foo' AS "first", Json_Array('a', 33) 
      AS "json_second"), 'second') AS "item";
    JsonGet_Grp_Size(val)
    JsonGet_String(arg1, arg2, [arg3] …)
    JsonGet_Int(arg1, arg2, [arg3] …)
    JsonGet_Real(arg1, arg2, [arg3] …)
    SELECT 
    JsonGet_String('{"qty":7,"price":29.50,"garanty":null}','price') "String",
    JsonGet_Int('{"qty":7,"price":29.50,"garanty":null}','price') "Int",
    JsonGet_Real('{"qty":7,"price":29.50,"garanty":null}','price') "Real";
    SELECT 
    JsonGet_Real('{"qty":7,"price":29.50,"garanty":null}','price',4) "Real";
    SELECT 
    JsonGet_Int(Json_Array(45,28,36,45,89), '[4]') "Rank",
    JsonGet_Int(Json_Array(45,28,36,45,89), '[#]') "Number",
    JsonGet_String(Json_Array(45,28,36,45,89), '[","]') "Concat",
    JsonGet_Int(Json_Array(45,28,36,45,89), '[+]') "Sum",
    JsonGet_Real(Json_Array(45,28,36,45,89), '[!]', 2) "Avg";
    Json_Item_Merge(arg1, arg2, …)
    SELECT Json_Item_Merge(Json_Array('a','b','c'), Json_Array('d','e','f')) AS "Result";
    SELECT Json_Item_Merge(Json_Object(1 "a", 2 "b", 3 "c"), Json_Object(4 "d",5 "b",6 "f")) 
      AS "Result";
    JsonLocate(arg1, arg2, [arg3], …):
    SELECT JsonLocate('{"AUTHORS":[{"FN":"Jules", "LN":"Verne"}, 
      {"FN":"Jack", "LN":"London"}]}' json_, 'Jack') PATH;
    SELECT 
    JsonLocate('[45,28,[36,45],89]',45) FIRST,
    JsonLocate('[45,28,[36,45],89]',45,2) SECOND,
    JsonLocate('[45,28,[36,45],89]',45.0) `wrong type`,
    JsonLocate('[45,28,[36,45],89]','[36,45]' json_) JSON;
    SELECT JsonLocate('{"AUTHORS":[{"FN":"Jules", "LN":"Verne"}, 
      {"FN":"Jack", "LN":"London"}]}' json_, 'VERNE' ci) PATH;
    Json_Locate_All(arg1, arg2, [arg3], …):
    SELECT Json_Locate_All('[[45,28],[[36,45],89]]',45);
    SELECT JsonGet_Int(Json_Locate_All('[[45,28],[[36,45],89]]',45), '$[#]') "Nb of occurs";
    Json_Make_Array(val1, …, valn)
    SELECT Json_Make_Array(56, 3.1416, 'My name is "Foo"', NULL);
    Json_Make_Object(arg1, …, argn)
    SELECT Json_Make_Object(56, 3.1416, 'machin', NULL);
    SELECT Json_Make_Object(56 qty, 3.1416 price, 'machin' truc, NULL garanty);
    SELECT Json_Make_Object(matricule, nom, titre, salaire) FROM connect.employe WHERE nom = 'PANTIER';
    Json_Object_Add(arg1, arg2, [arg3] …)
    SELECT Json_Object_Add
      ('{"item":"T-shirt","qty":27,"price":24.99}' json_old,'blue' color) newobj;
    Json_Object_Delete(arg1, arg2, [arg3] …):
    SELECT Json_Object_Delete('{"item":"T-shirt","qty":27,"price":24.99}' json_old, 'qty') newobj;
    Json_Object_Grp(arg1,arg2)
    SELECT name, json_object_grp(NUMBER,race) FROM pet GROUP BY name;
    Json_Object_Key([key1, val1 [, …, keyn, valn]])
    SELECT Json_Object_Key('qty', 56, 'price', 3.1416, 'truc', 'machin', 'garanty', NULL);
    Json_Object_List(arg1, …):
    SELECT Json_Object_List(Json_Object(56 qty,3.1416 price,'machin' truc, NULL garanty))
      "Key List";
    Json_Object_Nonull(arg1, …, argn)
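    This builds an object like Json_Make_Object (Json_Object in older versions) but, as its name suggests, leaves out null values; a small sketch with illustrative arguments:

    SELECT Json_Object_Nonull(56 qty, 3.1416 price, NULL garanty);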
    Json_Object_Values(json_object)
    SELECT Json_Object_Values('{"One":1,"Two":2,"Three":3}') "Value List";
    JsonSet_Grp_Size(val)
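    JsonSet_Grp_Size sets (and JsonGet_Grp_Size, above, returns) the maximum number of values handled by the aggregate functions Json_Array_Grp and Json_Object_Grp; for instance, with an arbitrary value:

    SELECT JsonSet_Grp_Size(30);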
    Json_{Set | Insert | Update}_Item(json_doc, [item, path [, val, path …]])
    SET @j = Json_Array(1, 2, 3, Json_Object_Key('quatre', 4));
    SELECT Json_Set_Item(@j, 'foo', '$[1]', 5, '$[3].cinq') AS "Set",
    Json_Insert_Item(@j, 'foo', '$[1]', 5, '$[3].cinq') AS "Insert",
    Json_Update_Item(@j, 'foo', '$[1]', 5, '$[3].cinq') AS "Update";
    JsonValue (val)
    SELECT JsonValue(3.1416);
    SELECT Json_Object(Jbin_Array_Add(Jbin_Array('a','b','c'), 'd') AS "Jbin_foo") AS "Result";
    SELECT Jbin_Array('a','b','c');
    SELECT Json_Serialize(Jbin_Array('a','b','c'));
    SELECT Jfile_Make('{"a":1, "b":[44, 55]}' json_, 'test.json');
    SELECT Json_Array_Add(Json_File('test.json', 'b'), 66);
    SELECT Json_Array_Add(Json_File('test.json'), 66, 'b');
    SELECT Json_Array_Add(Jbin_File('test.json', 'b'), 66);
    SELECT Json_File('test.json', 3);
    CREATE TABLE tb (
    n INT KEY,
    jfile_cols CHAR(10) NOT NULL);
    INSERT INTO tb VALUES(1,'test.json');
    UPDATE tb SET jfile_cols = (SELECT Json_Array_Add(Jbin_File('test.json', 'b'), 66))
    WHERE n = 1;
    SELECT JsonGet_String(jfile_cols, '[1]:*') FROM tb;
    SELECT Json_Object(Jbin_Object_Add(Jbin_File('bt2.json'), 4 AS "d") AS "Jbin_bt1")
      AS "Result";
    SELECT Json_Object(Json_Object_Add(Jbin_File('bt2.json'), 4 AS "d") AS "Jfile_bt1")
      AS "Result";
    create table assets (
       item_name varchar(32) primary key, /* A common attribute for all items */
       dynamic_cols  blob  /* Dynamic columns are stored here */
     );
    
    INSERT INTO assets VALUES
       ('MariaDB T-shirt', COLUMN_CREATE('color', 'blue', 'size', 'XL'));
    
    INSERT INTO assets VALUES
       ('Thinkpad Laptop', COLUMN_CREATE('color', 'black', 'price', 500));
    
    SELECT item_name, COLUMN_GET(dynamic_cols, 'color' as char) AS color FROM assets;
    +-----------------+-------+
    | item_name       | color |
    +-----------------+-------+
    | MariaDB T-shirt | blue  |
    | Thinkpad Laptop | black |
    +-----------------+-------+
    UPDATE assets SET dynamic_cols=COLUMN_DELETE(dynamic_cols, "price")
      WHERE COLUMN_GET(dynamic_cols, 'color' AS CHAR)='black';
    UPDATE assets SET dynamic_cols=COLUMN_ADD(dynamic_cols, 'warranty', '3 years')
       WHERE item_name='Thinkpad Laptop';
    SELECT item_name, column_list(dynamic_cols) FROM assets;
    +-----------------+---------------------------+
    | item_name       | column_list(dynamic_cols) |
    +-----------------+---------------------------+
    | MariaDB T-shirt | `size`,`color`            |
    | Thinkpad Laptop | `color`,`warranty`        |
    +-----------------+---------------------------+
    
    SELECT item_name, COLUMN_JSON(dynamic_cols) FROM assets;
    +-----------------+----------------------------------------+
    | item_name       | COLUMN_JSON(dynamic_cols)              |
    +-----------------+----------------------------------------+
    | MariaDB T-shirt | {"size":"XL","color":"blue"}           |
    | Thinkpad Laptop | {"color":"black","warranty":"3 years"} |
    +-----------------+----------------------------------------+
    create table jassets (
       item_name varchar(32) primary key, /* A common attribute for all items */
       json_cols varchar(512)  /* JSON columns are stored here */
     );
    
    INSERT INTO jassets VALUES
       ('MariaDB T-shirt', Json_Object('blue' color, 'XL' size));
    
    INSERT INTO jassets VALUES
       ('Thinkpad Laptop', Json_Object('black' color, 500 price));
    
    SELECT item_name, JsonGet_String(json_cols, 'color') AS color FROM jassets;
    +-----------------+-------+
    | item_name       | color |
    +-----------------+-------+
    | MariaDB T-shirt | blue  |
    | Thinkpad Laptop | black |
    +-----------------+-------+
    UPDATE jassets SET json_cols=Json_Object_Delete(json_cols, 'price')
     WHERE JsonGet_String(json_cols, 'color')='black';
    UPDATE jassets SET json_cols=Json_Object_Add(json_cols, '3 years' warranty)
     WHERE item_name='Thinkpad Laptop';
    SELECT item_name, Json_Object_List(json_cols) FROM jassets;
    +-----------------+-----------------------------+
    | item_name       | Json_Object_List(json_cols) |
    +-----------------+-----------------------------+
    | MariaDB T-shirt | ["color","size"]            |
    | Thinkpad Laptop | ["color","warranty"]        |
    +-----------------+-----------------------------+
    
    SELECT item_name, json_cols FROM jassets;
    +-----------------+----------------------------------------+
    | item_name       | json_cols                              |
    +-----------------+----------------------------------------+
    | MariaDB T-shirt | {"color":"blue","size":"XL"}           |
    | Thinkpad Laptop | {"color":"black","warranty":"3 years"} |
    +-----------------+----------------------------------------+
    CREATE TABLE xj1 (ROW VARCHAR(500) jpath='*') ENGINE=CONNECT table_type=JSON file_name='biblio3.json' option_list='jmode=2';
    CREATE TABLE xj1 (ROW VARCHAR(500) field_format='*') 
     ENGINE=CONNECT table_type=JSON file_name='biblio3.json' option_list='jmode=2';
    INSERT INTO xj1
      SELECT json_object_nonull(ISBN, LANGUAGE LANG, SUBJECT, 
        json_array_grp(json_object(authorfn FIRSTNAME, authorln LASTNAME)) json_AUTHOR, TITLE,
        json_object(translated PREFIX, json_object(tranfn FIRSTNAME, tranln LASTNAME) json_TRANSLATOR) 
        json_TRANSLATED, json_object(publisher NAME, LOCATION PLACE) json_PUBLISHER, DATE DATEPUB) 
    FROM xsampall2 GROUP BY isbn;
    CREATE TABLE jsampall3 (
    ISBN CHAR(15),
    LANGUAGE CHAR(2) jpath='LANG',
    SUBJECT CHAR(32),
    AUTHORFN CHAR(128) jpath='AUTHOR:[X]:FIRSTNAME',
    AUTHORLN CHAR(128) jpath='AUTHOR:[X]:LASTNAME',
    TITLE CHAR(32),
    TRANSLATED CHAR(32) jpath='TRANSLATOR:PREFIX',
    TRANSLATORFN CHAR(128) jpath='TRANSLATOR:FIRSTNAME',
    TRANSLATORLN CHAR(128) jpath='TRANSLATOR:LASTNAME',
    PUBLISHER CHAR(20) jpath='PUBLISHER:NAME',
    LOCATION CHAR(20) jpath='PUBLISHER:PLACE',
    DATE INT(4) jpath='DATEPUB')
    ENGINE=CONNECT table_type=JSON file_name='biblio3.json';
    CREATE TABLE jsampall3 (
    ISBN CHAR(15),
    LANGUAGE CHAR(2) field_format='LANG',
    SUBJECT CHAR(32),
    AUTHORFN CHAR(128) field_format='AUTHOR:[X]:FIRSTNAME',
    AUTHORLN CHAR(128) field_format='AUTHOR:[X]:LASTNAME',
    TITLE CHAR(32),
    TRANSLATED CHAR(32) field_format='TRANSLATOR:PREFIX',
    TRANSLATORFN CHAR(128) field_format='TRANSLATOR:FIRSTNAME',
    TRANSLATORLN CHAR(128) field_format='TRANSLATOR:LASTNAME',
    PUBLISHER CHAR(20) field_format='PUBLISHER:NAME',
    LOCATION CHAR(20) field_format='PUBLISHER:PLACE',
    DATE INT(4) field_format='DATEPUB')
    ENGINE=CONNECT table_type=JSON file_name='biblio3.json';
    INSERT INTO jsampall3 SELECT * FROM xsampall;
    CREATE TABLE xj2 (ISBN CHAR(15), author VARCHAR(150) jpath='AUTHOR:*') ENGINE=CONNECT table_type=JSON file_name='biblio3.json' option_list='jmode=1';
    CREATE TABLE xj2 (ISBN CHAR(15), author VARCHAR(150) field_format='AUTHOR:*') 
      ENGINE=CONNECT table_type=JSON file_name='biblio3.json' option_list='jmode=1';
    UPDATE xj2 SET author =
    (SELECT json_array_grp(json_object(authorfn FIRSTNAME, authorln LASTNAME)) 
      FROM xsampall2 WHERE isbn = xj2.isbn);
    Jfile_Make(json_document, [file_name], [pretty]);
    SELECT Jfile_Make(Jbin_File('tb.json'), 0);
    SELECT jfile_convert('bibdoc.json','bibdoc0.json',350);
    SELECT jfile_bjson('bigfile.json','binfile.json',3500);
    CREATE OR REPLACE TABLE jinvent (
    _id CHAR(24) NOT NULL, 
    item CHAR(12) NOT NULL,
    instock VARCHAR(300) NOT NULL jpath='instock.*')
    ENGINE=CONNECT table_type=JSON tabname='inventory' lrecl=512
    CONNECTION='mongodb://localhost:27017';
    CREATE OR REPLACE TABLE jinvent (
    _id CHAR(24) NOT NULL, 
    item CHAR(12) NOT NULL,
    instock VARCHAR(300) NOT NULL field_format='instock.*')
    ENGINE=CONNECT table_type=JSON tabname='inventory' lrecl=512
    CONNECTION='mongodb://localhost:27017';

    InnoDB System Variables

    This page lists the system variables available for configuring InnoDB's behavior, performance, buffers, and logs.

    This page documents system variables related to the InnoDB storage engine. For options that are not system variables, see InnoDB Options.

    See Server System Variables for a complete list of system variables and instructions on setting them.

    Also see the Full list of MariaDB options, system and status variables.

    have_innodb

    • Description: If the server supports InnoDB, this is set to YES; otherwise it is set to NO. Removed in , use the table or instead.

    • Scope: Global

    • Dynamic: No

    • Removed:

    ignore_builtin_innodb

    • Description: Setting this to 1 results in the built-in InnoDB storage engine being ignored. In some versions of MariaDB, XtraDB is the default and is always present, so this variable is ignored and setting it results in a warning. From to , when InnoDB was the default instead of XtraDB, this variable needed to be set. Usually used in conjunction with the option to use the InnoDB plugin.

    • Command line: --ignore-builtin-innodb

    • Scope: Global

    innodb_adaptive_checkpoint

    • Description: Replaced with . Controls adaptive checkpointing. InnoDB's fuzzy checkpointing can cause stalls, as many dirty blocks are flushed at once as the checkpoint age nears the maximum. Adaptive checkpointing aims for more consistent flushing, approximately modified age / maximum checkpoint age. Can result in larger transaction log files.

      • reflex: Similar to flushing, but flushes blocks constantly and contiguously based on the oldest modified age. If the age exceeds 1/2 of the maximum age capacity, flushing is weakly contiguous. If the age exceeds 3/4, flushing is strong. Strength can be adjusted by the variable .

    innodb_adaptive_flushing

    • Description: If set to 1, the default, the server will dynamically adjust the flush rate of dirty pages in the . This helps reduce brief bursts of I/O activity. If set to 0, adaptive flushing will only take place when the limit specified by is reached.

    • Command line: --innodb-adaptive-flushing={0|1}

    • Scope: Global

    innodb_adaptive_flushing_lwm

    • Description: Adaptive flushing is enabled when this low water mark percentage of the capacity is reached. Takes effect even if is disabled.

    • Command line: --innodb-adaptive-flushing-lwm=#

    • Scope: Global

    • Dynamic: Yes
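
    For example, since the variable is dynamic, the low water mark can be adjusted at runtime and then inspected (the value 10 is only illustrative):

    SET GLOBAL innodb_adaptive_flushing_lwm = 10;
    SELECT @@GLOBAL.innodb_adaptive_flushing_lwm;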

    innodb_adaptive_flushing_method

    • Description: Determines the method of flushing dirty blocks from the InnoDB . If set to native or 0, the original InnoDB method is used. The maximum checkpoint age is determined by the total length of all transaction log files. When the checkpoint age reaches the maximum checkpoint age, blocks are flushed. This can cause lag if there are many updates per second and many blocks with an almost identical age need to be flushed. If set to estimate or 1, the default, the oldest modified age is compared with the maximum age capacity. If it's more than 1/4 of this age, blocks are flushed every second. The number of blocks flushed is determined by the number of modified blocks, the LSN progress speed and the average age of all modified blocks. It's therefore independent of the for the 1-second loop, but not entirely so for the 10-second loop. If set to keep_average or 2, designed specifically for SSD cards, a shorter loop cycle is used in an attempt to keep the I/O rate constant. Removed in /XtraDB 5.6 and replaced with InnoDB flushing method from MySQL 5.6.

    innodb_adaptive_hash_index

    • Description: If set to 1, the default until , the adaptive hash index is enabled. Based on performance testing, the InnoDB adaptive hash index helps performance in mostly read-only workloads, and could slow down performance in other environments, especially , , , or operations.

    • Command line: --innodb-adaptive-hash-index={0|1}

    • Scope: Global
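
    As an illustration, the adaptive hash index can be switched off at runtime on workloads where it hurts performance, and the current setting verified:

    SET GLOBAL innodb_adaptive_hash_index = OFF;
    SHOW GLOBAL VARIABLES LIKE 'innodb_adaptive_hash_index';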

    innodb_adaptive_hash_index_partitions

    • Description: Specifies the number of partitions for use in adaptive searching. If set to 1, no extra partitions are created. XtraDB-only. From (which uses InnoDB as default instead of XtraDB), this is an alias for to allow for easier upgrades.

    • Command line: innodb-adaptive-hash-index-partitions=#

    • Scope: Global

    innodb_adaptive_hash_index_parts

    • Description: Specifies the number of partitions for use in adaptive searching. If set to 1, no extra partitions are created.

    • Command line: innodb-adaptive-hash-index-parts=#

    • Scope: Global

    • Dynamic: No

    innodb_adaptive_max_sleep_delay

    • Description: Maximum time in microseconds to automatically adjust the value to, based on the workload. Useful in extremely busy systems with hundreds of thousands of simultaneous connections. 0 disables any limit. Deprecated and ignored from .

    • Command line: --innodb-adaptive-max-sleep-delay=#

    • Scope: Global

    innodb_additional_mem_pool_size

    • Description: Size in bytes of the memory pool used for storing information about internal data structures. Defaults to 8MB; if your application has many tables and a large structure, and this is exceeded, operating system memory is allocated and warning messages are written to the error log, in which case you should increase this value. Deprecated in and removed in along with InnoDB's internal memory allocator.

    • Command line: --innodb-additional-mem-pool-size=#

    • Scope: Global

    innodb_alter_copy_bulk

    • Description: Allows bulk insert operations for ALTER TABLE operations that use the copy algorithm.

    • Scope: Global

    • Dynamic: Yes

    • Data Type: boolean

    innodb_api_bk_commit_interval

    • Description: Time in seconds between auto-commits for idle connections using the InnoDB memcached interface (not implemented in MariaDB).

    • Command line: --innodb-api-bk-commit-interval=#

    • Scope: Global

    • Dynamic: Yes

    innodb_api_disable_rowlock

    • Description: For use with MySQL's memcached (not implemented in MariaDB)

    • Command line: --innodb-api-disable-rowlock={0|1}

    • Scope: Global

    • Dynamic: No

    innodb_api_enable_binlog

    • Description: For use with MySQL's memcached (not implemented in MariaDB)

    • Command line: --innodb-api-enable-binlog={0|1}

    • Scope: Global

    • Dynamic: No

    innodb_api_enable_mdl

    • Description: For use with MySQL's memcached (not implemented in MariaDB)

    • Command line: --innodb-api-enable-mdl={0|1}

    • Scope: Global

    • Dynamic: No

    innodb_api_trx_level

    • Description: For use with MySQL's memcached (not implemented in MariaDB)

    • Command line: --innodb-api-trx-level=#

    • Scope: Global

    • Dynamic: Yes

    innodb_auto_lru_dump

    • Description: Renamed since XtraDB 5.5.10-20.1, which was in turn replaced by in .

    • Command line: --innodb-auto-lru-dump=#

    • Removed: XtraDB 5.5.10-20.1

    innodb_autoextend_increment

    • Description: Size in MB to increment an auto-extending shared tablespace file when it becomes full. If was set to 1, this setting does not apply to the resulting per-table tablespace files, which are automatically extended in their own way.

    • Command line: --innodb-autoextend-increment=#

    • Scope: Global

    innodb_autoinc_lock_mode

    • Description: The lock mode that is used when generating values for InnoDB tables.

      • Valid values are:

        • 0 is the traditional lock mode.

    innodb_background_scrub_data_check_interval

    • Description: Check whether spaces need scrubbing every seconds. See . Deprecated and ignored from .

    • Command line: --innodb-background-scrub-data-check-interval=#

    • Scope: Global

    • Dynamic: Yes

    innodb_background_scrub_data_compressed

    • Description: Enable scrubbing of compressed data by background threads (same as encryption_threads). See . Deprecated and ignored from .

    • Command line: --innodb-background-scrub-data-compressed={0|1}

    • Scope: Global

    • Dynamic: Yes

    innodb_background_scrub_data_interval

    • Description: Scrub spaces that were last scrubbed longer than this number of seconds ago. See . Deprecated and ignored from .

    • Command line: --innodb-background-scrub-data-interval=#

    • Scope: Global

    • Dynamic: Yes

    innodb_background_scrub_data_uncompressed

    • Description: Enable scrubbing of uncompressed data by background threads (same as encryption_threads). See . Deprecated and ignored from .

    • Command line: --innodb-background-scrub-data-uncompressed={0|1}

    • Scope: Global

    • Dynamic: Yes

    innodb_blocking_buffer_pool_restore

    • Description: If set to 1 (0 is default), XtraDB will wait until the least-recently used (LRU) dump is completely restored upon restart before reporting back to the server that it has successfully started up. Available with XtraDB only, not InnoDB.

    • Command line: innodb-blocking-buffer-pool-restore={0|1}

    • Scope: Global

    innodb_buf_dump_status_frequency

    • Description: Determines how often (as a percent) the buffer pool dump status should be printed in the logs. For example, 10 means that the buffer pool dump status is printed when every 10% of the number of buffer pool pages are dumped. The default is 0 (only start and end status is printed).

    • Command line: --innodb-buf-dump-status-frequency=#

    • Scope: Global

    innodb_buffer_pool_chunk_size

    • Description: Chunk size used for dynamically resizing the . Note that changing this setting can change the size of the buffer pool. When is used this value is effectively rounded up to the next multiple of . See . From , the variable is autosized based on the .

    • Command line: --innodb-buffer-pool-chunk-size=#

    • Scope: Global

    innodb_buffer_pool_dump_at_shutdown

    • Description: Whether to record pages cached in the on server shutdown, which reduces the length of the warmup the next time the server starts. The related specifies whether the buffer pool is automatically warmed up at startup.

    • Command line: --innodb-buffer-pool-dump-at-shutdown={0|1}

    • Scope: Global

    • Dynamic: Yes

    innodb_buffer_pool_dump_now

    • Description: Immediately records pages stored in the . The related does the reverse, and will immediately warm up the buffer pool.

    • Command line: --innodb-buffer-pool-dump-now={0|1}

    • Scope: Global

    • Dynamic: Yes
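
    A minimal sketch of triggering a dump and then a load by hand, and checking progress through the corresponding status variables:

    SET GLOBAL innodb_buffer_pool_dump_now = ON;
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_dump_status';
    SET GLOBAL innodb_buffer_pool_load_now = ON;
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_load_status';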

    innodb_buffer_pool_dump_pct

    • Description: Dump only the hottest N% of each .

    • Command line: --innodb-buffer-pool-dump-pct=#

    • Scope: Global

    • Dynamic: Yes

    innodb_buffer_pool_evict

    • Description: Evict pages from the buffer pool. If set to "uncompressed" then all uncompressed pages are evicted from the buffer pool. Variable to be used only for testing. Only exists in DEBUG builds.

    • Command line: --innodb-buffer-pool-evict=#

    • Scope: Global

    • Dynamic: Yes

    innodb_buffer_pool_filename

    • Description: The file that holds the list of page numbers set by and .

    • Command line: --innodb-buffer-pool-filename=file

    • Scope: Global

    • Dynamic: Yes

    innodb_buffer_pool_instances

    • Description: If is set to more than 1GB, innodb_buffer_pool_instances divides the buffer pool into the specified number of instances. The default was 1 in , but for large systems with buffer pools of many gigabytes, multiple instances could help improve concurrency by reducing contention. The default is 8 in MariaDB 10 (except on Windows 32-bit, where it varies according to , or from , where it is set to 1 if < 1GB). Each instance manages its own data structures and takes an equal portion of the total buffer pool size, so for example if innodb_buffer_pool_size is 4GB and innodb_buffer_pool_instances is set to 4, each instance is 1GB. Each instance should ideally be at least 1GB in size. Starting with , performance improvements intended to reduce the overhead of context-switching between buffer pools changed the recommended number of innodb_buffer_pool_instances to one for every 128GB of buffer pool size. Based on these changes, the variable is deprecated and ignored from , where the buffer pool runs in a single instance regardless of size.

    • Command line: --innodb-buffer-pool-instances=#

    innodb_buffer_pool_load_abort

    • Description: Aborts the process of restoring contents started by or .

    • Command line: --innodb-buffer-pool-load-abort={0|1}

    • Scope: Global

    • Dynamic: Yes

    innodb_buffer_pool_load_at_startup

    • Description: Specifies whether the is automatically warmed up when the server starts by loading the pages held earlier. The related specifies whether pages are saved at shutdown. If the buffer pool is large and takes a long time to load, increasing at startup may help.

    • Command line: --innodb-buffer-pool-load-at-startup={0|1}

    • Scope: Global

    innodb_buffer_pool_load_now

    • Description: Immediately warms up the by loading the stored data pages. The related does the reverse, and immediately records pages stored in the buffer pool.

    • Command line: --innodb-buffer-pool-load-now={0|1}

    • Scope: Global

    • Dynamic: Yes

    innodb_buffer_pool_load_pages_abort

    • Description: Number of pages during a buffer pool load to process before signaling . Debug builds only.

    • Command line: --innodb-buffer-pool-load-pages-abort=#

    • Scope: Global

    • Dynamic: Yes

    innodb_buffer_pool_populate

    • Description: When set to 1 (0 is default), XtraDB will preallocate pages in the buffer pool on starting up so that NUMA allocation decisions are made while the buffer cache is still clean. XtraDB only. This option was made ineffective in . Added as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

    • Command line: innodb-buffer-pool-populate={0|1}

    • Scope: Global

    innodb_buffer_pool_restore_at_startup

    • Description: Time in seconds between automatic buffer pool dumps. If set to a non-zero value, XtraDB will also perform an automatic restore of the at startup. If set to 0, automatic dumps are not performed, nor automatic restores on startup. Replaced by in .

    • Command line: innodb-buffer-pool-restore-at-startup

    • Scope: Global

    innodb_buffer_pool_shm_checksum

    • Description: Used with Percona's SHM buffer pool patch in XtraDB 5.5. Was shortly deprecated and removed in XtraDB 5.6. XtraDB only.

    • Command line: innodb-buffer-pool-shm-checksum={0|1}

    • Scope: Global

    • Dynamic: No

    innodb_buffer_pool_shm_key

    • Description: Used with Percona's SHM buffer pool patch in XtraDB 5.5. Later deprecated in XtraDB 5.5, and removed in XtraDB 5.6.

    • Command line: innodb-buffer-pool-shm-key={0|1}

    • Scope: Global

    • Dynamic: No

    innodb_buffer_pool_size

    • Description: InnoDB buffer pool size in bytes. This is the primary value to adjust on a database server with entirely or primarily InnoDB tables; it can be set to up to 80% of total memory in such environments. See the for more on setting this variable, and also if doing so dynamically.

    • Command line: --innodb-buffer-pool-size=#

    • Scope: Global
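
    For example, on versions where the buffer pool can be resized dynamically, the size can be changed at runtime and then checked (4 GiB here is only illustrative):

    SET GLOBAL innodb_buffer_pool_size = 4294967296;
    SELECT @@GLOBAL.innodb_buffer_pool_size;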

    innodb_buffer_pool_size_auto_min

    • Description: Minimum innodb_buffer_pool_size in bytes for dynamic shrinking on memory pressure. Only affects Linux. If a memory pressure event is reported by Linux, the innodb_buffer_pool_size may be automatically shrunk towards this value. By default, set to , that is, memory pressure events will be ignored. 0 sets no minimum value.

    • Command line: --innodb-buffer-pool-size-auto-min=#

    • Scope: Global

    innodb_buffer_pool_size_max

    • Description: Maximum innodb_buffer_pool_size value.

    • Command line: --innodb-buffer-pool-size-max=#

    • Scope: Global

    • Dynamic: No

    innodb_change_buffer_dump

    • Description: If set, causes the contents of the InnoDB change buffer to be dumped to the server error log at startup. Only available in debug builds.

    • Scope: Global

    • Dynamic: No

    • Data Type: boolean

    innodb_change_buffer_max_size

    • Description: Maximum size of the as a percentage of the total buffer pool. The default is 25%, and this can be increased up to 50% for servers with high write activity, and lowered down to 0 for servers used exclusively for reporting.

    • Command line: --innodb-change-buffer-max-size=#

    • Scope: Global

    • Dynamic: Yes
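
    For example, a write-heavy server might raise the limit to the maximum of 50%, while a reporting server might set it to 0 (the value below is illustrative):

    SET GLOBAL innodb_change_buffer_max_size = 50;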

    innodb_change_buffering

    • Description: Sets how change buffering is performed. See for details on the settings. Deprecated and ignored from .

    • Command line: --innodb-change-buffering=#

    • Scope: Global

    • Dynamic: Yes

    innodb_change_buffering_debug

    • Description: If set to 1, a debug flag is set. 1 forces all changes to the change buffer, while 2 causes a crash at merge. 0, the default, indicates no flag is set. Only available in debug builds.

    • Command line: --innodb-change-buffering-debug=#

    innodb_checkpoint_age_target

    • Description: The maximum value of the checkpoint age. If set to 0, has no effect. Removed in /XtraDB 5.6 and replaced with InnoDB flushing method from MySQL 5.6.

    • Command line: innodb-checkpoint-age-target=#

    • Scope: Global

    innodb_checksum_algorithm

    • Description: Specifies how the InnoDB tablespace checksum is generated and verified.

      • innodb: Backwards compatible with earlier versions (<= ). Deprecated in , , and removed in . If really needed, data files can still be converted with .

      • crc32: A newer, faster algorithm, but incompatible with earlier versions. Tablespace blocks are converted to the new format over time, meaning that a mix of checksums may be present.

    innodb_checksums

    • Description: By default, performs checksum validation on all pages read from disk, which provides extra fault tolerance. You would usually want this set to 1 in production environments, although setting it to 0 can provide marginal performance improvements. Deprecated and functionality replaced by in , and should be removed to avoid conflicts. ON is equivalent to --innodb_checksum_algorithm=innodb and OFF to --innodb_checksum_algorithm=none.

    • Command line: --innodb-checksums

    innodb_cleaner_lsn_age_factor

    • Description: XtraDB has enhanced page cleaner heuristics, and with these in place, the default InnoDB adaptive flushing may be too aggressive. As a result, a new LSN age factor formula has been introduced, controlled by this variable. The default setting, high_checkpoint, uses the new formula, while the alternative, legacy, uses the original algorithm. XtraDB only. Added as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

    • Command line: --innodb-cleaner-lsn-age-factor=value

    • Scope: Global

    innodb_cmp_per_index_enabled

    • Description: If set to ON (OFF is default), per-index compression statistics are stored in the table. These are expensive to record, so this setting should only be changed with care, such as for performance tuning on development or replica servers.

    • Command line: --innodb-cmp-per-index-enabled={0|1}

    • Scope: Global

    innodb_commit_concurrency

    • Description: Limit to the number of transaction threads that can commit simultaneously. 0, the default, imposes no limit. While you can change from one positive limit to another at runtime, you cannot set this variable to 0, or change it from 0, while the server is running. Deprecated and ignored from .

    • Command line: --innodb-commit-concurrency=#

    • Scope: Global

    • Dynamic: Yes

    innodb_compression_algorithm

    • Description: Compression algorithm used for . The supported values are:

      • none: Pages are not compressed.

      • zlib: Pages are compressed using the bundled compression algorithm.

    innodb_compression_default

    • Description: Whether or not is enabled by default for new tables.

      • The default value is OFF, which means new tables are not compressed.

      • See for more information.

    • Command line:

    innodb_compression_failure_threshold_pct

    • Description: Specifies the percentage cutoff for expensive compression failures during updates to a table that uses , after which free space is added to each new compressed page, dynamically adjusted up to the level set by . Zero disables checking of compression efficiency and adjusting padding.

      • See for more information.

    • Command line: --innodb-compression-failure-threshold-pct=#

    innodb_compression_level

    • Description: Specifies the default level of compression for tables that use .

      • Only a subset of InnoDB page compression algorithms support compression levels. If an InnoDB page compression algorithm does not support compression levels, then the compression level value is ignored.

      • The compression level can be set to any value between 1 and 9. The default compression level is 6. The range goes from the fastest to the most compact, which means that

    innodb_compression_pad_pct_max

    • Description: The maximum percentage of reserved free space within each compressed page for tables that use . Reserved free space is used when the page's data is reorganized and might be recompressed. Only used when is not zero, and the rate of compression failures exceeds its setting.

      • See for more information.

    • Command line: --innodb-compression-pad-pct-max=#

    innodb_concurrency_tickets

    • Description: Number of times a newly-entered thread can enter and leave until it is again subject to the limitations of and may possibly be queued. Deprecated and ignored from .

    • Command line: --innodb-concurrency-tickets=#

    • Scope: Global

    • Dynamic: Yes

    innodb_corrupt_table_action

    • Description: What action to perform when a corrupt table is found. XtraDB only.

      • When set to assert, the default, XtraDB will intentionally crash the server when it detects corrupted data in a single-table tablespace, with an assertion failure.

      • When set to warn, it will pass corruption as corrupt table instead of crashing, and disable all further I/O (except for deletion) on the table file.

    innodb_data_file_buffering

    • Description: Whether to enable the file system cache for data files. Set to OFF by default; it is set to ON if is set to fsync, littlesync, nosync, or (Windows-specific) normal.

    • Command line: --innodb-data-file-buffering={0|1}

    innodb_data_file_path

    • Description: Individual data files, paths and sizes. The value of is joined to each path specified by innodb_data_file_path to get the full directory path. If innodb_data_home_dir is an empty string, absolute paths can be specified here. A file size is specified (with K for kilobytes, M for megabytes and G for gigabytes). Whether or not to autoextend the data file, and whether or not to on startup, may also be specified.

    • Command line: --innodb-data-file-path=name

    • Scope: Global

    innodb_data_file_write_through

    • Description: Whether writes to InnoDB data files (including the temporary tablespace) are write-through. Set to OFF by default; it is set to ON if is set to O_DSYNC. On systems that support FUA it may make sense to enable write-through, to avoid extra system calls.

    • Command line: --innodb-data-file-write-through={0|1}

    • Scope: Global

    innodb_data_home_dir

    • Description: Directory path for all data files in the shared tablespace (assuming is not enabled). File-specific information can be added in , as well as absolute paths if innodb_data_home_dir is set to an empty string.

    • Command line: --innodb-data-home-dir=path

    • Scope: Global

    • Dynamic: No

    innodb_deadlock_detect

    • Description: By default, the InnoDB deadlock detector is enabled. If set to off, deadlock detection is disabled and MariaDB will rely on instead. This may be more efficient in systems with high concurrency as deadlock detection can cause a bottleneck when a number of threads have to wait for the same lock.

    • Command line: --innodb-deadlock-detect

    • Scope: Global

    • Dynamic: Yes

    innodb_deadlock_report

    • Description: How to report deadlocks (if ).

      • off: Do not report any details of deadlocks.

      • basic: Report transactions and waiting locks.

    innodb_default_page_encryption_key

    • Description: Encryption key used for page encryption.

      • See and for more information.

    • Command line: --innodb-default-page-encryption-key=#

    • Scope: Global

    innodb_default_encryption_key_id

    • Description: ID of encryption key used by default to encrypt InnoDB tablespaces.

      • See and for more information.

    • Command line: --innodb-default-encryption-key-id=#

    • Scope: Global, Session

    innodb_default_row_format

    • Description: Specifies the default to be used for InnoDB tables. The compressed row format cannot be set as the default.

      • See for more information.

    • Command line: --innodb-default-row-format=value
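
    A brief sketch: after changing the default, newly created tables pick up the new row format (the table name below is a placeholder):

    SET GLOBAL innodb_default_row_format = 'dynamic';
    CREATE TABLE t_rowformat_demo (id INT PRIMARY KEY, v VARCHAR(100)) ENGINE=InnoDB;
    SHOW TABLE STATUS LIKE 't_rowformat_demo';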

    innodb_defragment

    • Description: When set to 1 (the default is 0), InnoDB defragmentation is enabled. When set to 0, all existing defragmentation operations are paused and new defragmentation commands will fail. Paused defragmentation commands resume when the variable is set to 1 again. See .

    • Command line: --innodb-defragment={0|1}

    • Scope: Global
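
    A minimal sketch, assuming defragmentation is wanted for a specific table (the database and table names are placeholders); with the variable enabled, OPTIMIZE TABLE defragments the table instead of rebuilding it:

    SET GLOBAL innodb_defragment = ON;
    OPTIMIZE TABLE mydb.mytable;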

    innodb_defragment_fill_factor

    • Description: Indicates how full defragmentation should fill a page. Together with , this ensures defragmentation won’t pack the page so full that the next insert causes a page split. Whichever variable indicates more defragmentation gain is the one that takes effect. See .

    • Command line: --innodb-defragment-fill-factor=#

    • Scope: Global

    • Dynamic: Yes

    innodb_defragment_fill_factor_n_recs

    • Description: Number of records of space that defragmentation should leave on the page. This variable, together with , is introduced so that defragmentation won't pack the page so full that the next insert causes a page split. Whichever variable indicates more defragmentation gain is the one that takes effect. See .

    • Command line: --innodb-defragment-fill-factor-n-recs=#

    • Scope: Global

    innodb_defragment_frequency

    • Description: Maximum times per second for defragmenting a single index. This controls the number of times the defragmentation thread can request X_LOCK on an index. The defragmentation thread will check whether 1/defragment_frequency (s) has passed since it last worked on this index, and put the index back in the queue if not enough time has passed. The actual frequency can only be lower than this given number. See .

    • Command line: --innodb-defragment-frequency=#

    • Scope: Global

    innodb_defragment_n_pages

    • Description: Number of pages considered at once when merging multiple pages to defragment. See .

    • Command line: --innodb-defragment-n-pages=#

    • Scope: Global

    • Dynamic: Yes

    innodb_defragment_stats_accuracy

    • Description: Number of defragment stats changes there are before the stats are written to persistent storage. Defaults to zero, meaning disable defragment stats tracking. See .

    • Command line: --innodb-defragment-stats-accuracy=#

    • Scope: Global

    • Dynamic: Yes

    innodb_dict_size_limit

    • Description: Size in bytes of a soft limit on the memory used by tables in the data dictionary. Once this limit is reached, XtraDB will attempt to remove unused entries. If set to 0, the default and standard InnoDB behavior, there is no limit to memory usage. Removed in /XtraDB 5.6 and replaced by MySQL 5.6's new implementation.

    • Command line: innodb-dict-size-limit=#

    • Scope: Global

    innodb_disable_sort_file_cache

    • Description: If set to 1 (0 is default), the operating system file system cache for merge-sort temporary files is disabled.

    • Command line: --innodb-disable-sort-file-cache={0|1}

    • Scope: Global

    innodb_disallow_writes

    • Description: Tell InnoDB to stop any writes to disk.

    • Command line: None

    • Scope: Global

    • Dynamic: Yes

    innodb_doublewrite

    • Description: If set to ON, the default, data is first stored to a before being written to the data file, which improves fault tolerance. Disabling will provide a marginal performance improvement, and assumes that writes of are atomic. fast is available from , and is like ON, but writes are not synchronized to data files. The deprecated start-up parameter will cause innodb_doublewrite=ON to be changed to innodb_doublewrite=fast, which will prevent InnoDB from making any durable writes to data files. This would normally be done right before the log checkpoint LSN is updated. Depending on the file systems being used and their configuration, this may or may not be safe.

      The value innodb_doublewrite=fast differs from the previous combination of innodb_doublewrite=ON and innodb_flush_method=O_DIRECT_NO_FSYNC by always invoking os_file_flush() on the doublewrite buffer itself in buf_dblwr_t::flush_buffered_writes_completed(). This should be safer when there are multiple doublewrite batches between checkpoints.

      Typically, once per second, buf_flush_page_cleaner() would write out up to innodb_io_capacity pages and advance the log checkpoint. Also typically, innodb_io_capacity>128, which is the size of the doublewrite buffer in pages. Should os_file_flush_func() not be invoked between doublewrite batches, writes could be reordered in an unsafe way.

    innodb_doublewrite_file

    • Description: The absolute or relative path and filename to a dedicated tablespace for the . In heavy workloads, the doublewrite buffer can impact heavily on the server, and moving it to a different drive will reduce contention on random reads. Since the doublewrite buffer is mostly sequential writes, a traditional HDD is a better choice than SSD. This Percona XtraDB variable has not been ported to XtraDB 5.6.

    • Command line: innodb-doublewrite-file=filename

    • Scope: Global

    innodb_empty_free_list_algorithm

    • Description: XtraDB 5.6.13-61 introduced an algorithm to assist with reducing mutex contention when the buffer pool free list is empty, controlled by this variable. If set to backoff, the default until , the new algorithm is used. If set to legacy, the original InnoDB algorithm is used. XtraDB only. Added as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades. See for the reasons this was changed back to legacy in XtraDB 5.6.36-82.0. When upgrading from 10.0 to 10.1 (>= 10.1.24), for large buffer pools the default will remain backoff, while for small ones it is changed to legacy.

    • Command line: innodb-empty-free-list-algorithm=value

    innodb_enable_unsafe_group_commit

    • Description: Unneeded after XtraDB 1.0.5. If set to 0, the default, InnoDB will keep transactions between the transaction log and the binary log in the same order. Safer, but slower. If set to 1, transactions can be group-committed, but there is no guarantee of the order being kept, and a small risk of the two logs getting out of sync. In write-intensive environments, this can lead to a significant improvement in performance.

    • Command line: --innodb-enable-unsafe-group-commit

    • Scope: Global

    innodb_encrypt_log

    • Description: Enables encryption of the . This also enables encryption of some temporary files created internally by InnoDB, such as those used for merge sorts and row logs.

      • See and for more information.

    • Command line: --innodb-encrypt-log

    innodb_encrypt_tables

    • Description: Enables automatic encryption of all InnoDB tablespaces.

      • OFF - Disables table encryption for all new and existing tables that have the table option set to DEFAULT.

      • ON - Enables table encryption for all new and existing tables that have the table option set to DEFAULT

    innodb_encrypt_temporary_tables

    • Description: Enables automatic encryption of the InnoDB .

      • See and for more information.

    • Command line: --innodb-encrypt-temporary-tables={0|1}

    innodb_encryption_rotate_key_age

    • Description: Re-encrypt in the background any page having a key older than this number of key versions. When setting up encryption, this variable must be set to a non-zero value. Otherwise, when you enable encryption through , MariaDB won't be able to automatically encrypt any unencrypted tables.

      • See and for more information.

    • Command line: --innodb-encryption-rotate-key-age=#

    innodb_encryption_rotation_iops

    • Description: Use this many iops for background key rotation operations performed by the background encryption threads.

      • See and for more information.

    • Command line: --innodb-encryption-rotation_iops=#

    innodb_encryption_threads

    • Description: Number of background encryption threads performing background key rotation and . When setting up encryption, this variable must be set to a non-zero value. Otherwise, when you enable encryption through , MariaDB won't be able to automatically encrypt any unencrypted tables. It is recommended that this never be set higher than 255.

      • See and for more information.

    • Command line: --innodb-encryption-threads=#
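
    As a hedged sketch, assuming an encryption key management plugin is already loaded, background encryption could be enabled roughly as follows (the thread count and key age values are illustrative):

    SET GLOBAL innodb_encryption_threads = 4;
    SET GLOBAL innodb_encryption_rotate_key_age = 1;
    SET GLOBAL innodb_encrypt_tables = ON;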

    innodb_extra_rsegments

    • Description: Removed in XtraDB 5.5 and replaced by . Usually there is one rollback segment protected by a single mutex, a source of contention in high-write environments. This option specifies a number of extra user rollback segments. Changing the default will make the data readable by XtraDB only, and is incompatible with InnoDB. After modifying, the server must be shut down with a slow shutdown. If there is existing data, it must be dumped before changing, and re-imported after the change has taken effect.

    • Command line: --innodb-extra-rsegments=#

    • Scope: Global

    innodb_extra_undoslots

    • Description: Usually, InnoDB has 1024 undo slots in its rollback segment, so 1024 transactions can run in parallel. New transactions will fail if all slots are used. Setting this variable to 1 expands the available undo slots to 4072. Not recommended unless you get the Warning: cannot find a free slot for an undo log error in the error log, as it makes data files unusable for ibbackup, or MariaDB servers not run with this option. See also .

    • Command line: --innodb-extra-undoslots={0|1}

    • Scope: Global

    innodb_fake_changes

    • Description: From until , XtraDB-only option that enables the fake changes feature. In , setting up or restarting a replica can cause replication reads to perform more slowly, as MariaDB is single-threaded and needs to read the data before it can execute the queries. This can be sped up by using prefetching threads to warm the server, replaying the statements and then rolling back at commit. This however has an overhead from locking rows only then to undo changes at rollback. Fake changes attempts to reduce this overhead by reading the rows for INSERT, UPDATE and DELETE statements but not updating them. The rollback is then very fast with little or nothing to do. Added as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades. Not present in and beyond.

    • Command line: --innodb-fake-changes={0|1}

    innodb_fast_checksum

    • Description: Implements a more CPU efficient XtraDB checksum algorithm, useful for write-heavy loads with high I/O. If set to 1 on a server with tables that have been created with it set to 0, reads are slower, so tables should be recreated (dumped and reloaded). XtraDB will fail to start if set to 0 and there are tables created while set to 1. Replaced with in /XtraDB 5.6.

    • Command line: --innodb-fast-checksum={0|1}

    innodb_fast_shutdown

    • Description: The shutdown mode.

      • 0 - InnoDB performs a slow shutdown, including full purge (before , not always, due to ) and change buffer merge. Can be very slow, even taking hours in extreme cases.

      • 1 - the default, performs a fast shutdown, not performing a full purge or an insert buffer merge.

    innodb_fatal_semaphore_wait_threshold

    • Description: In MariaDB, the fatal semaphore timeout is configurable. This variable sets the maximum number of seconds for semaphores to time out in InnoDB.

    • Command line: --innodb-fatal-semaphore-wait-threshold=#

    • Scope: Global

    • Dynamic: No

    innodb_file_format

    • Description: File format for new tables. Can either be Antelope, the default and the original format, or Barracuda, which supports . Note that this value is also used when a table is re-created with an which requires a table copy. See for more on the file formats. Removed in 10.3.1 and restored as a deprecated and unused variable in 10.4.3 for compatibility purposes.

    • Command line: --innodb-file-format=value

    • Scope: Global

    innodb_file_format_check

    • Description: If set to 1, the default, checks the shared tablespace file format tag. If this is higher than the current version supported by XtraDB/InnoDB (for example Barracuda when only Antelope is supported), XtraDB/InnoDB will not start. If the value is not higher, XtraDB/InnoDB starts correctly and the value is set to this value. If innodb_file_format_check is set to 0, no checking is performed. See for more on the file formats.

    • Command line: --innodb-file-format-check={0|1}

    • Scope: Global

    innodb_file_format_max

    • Description: The highest file format. This is set to the value of the file format tag in the shared tablespace on startup (see ). If the server later creates a higher table format, innodb_file_format_max is set to that value. See for more on the file formats.

    • Command line: --innodb-file-format-max=value

    • Scope: Global

    innodb_file_per_table

    • Description: If set to ON, then new tables are created with their own . If set to OFF, then new tables are created in the instead. is only available with file-per-table tablespaces. Note that this value is also used when a table is re-created with an which requires a table copy. Deprecated in as there's no benefit to setting to OFF, the original InnoDB default.

    • Command line: --innodb-file-per-table

    innodb_fill_factor

    • Description: Percentage of B-tree page filled during bulk insert (sorted index build). Used as a hint rather than an absolute value. Setting to 70, for example, reserves 30% of the space on each B-tree page for the index to grow in future.

    • Command line: --innodb-fill-factor=#

    • Scope: Global

    innodb_flush_log_at_timeout

    • Description: Interval in seconds to write and flush the . Before MariaDB 10, this was fixed at one second, which is still the default, but this can now be changed. It's usually increased to reduce flushing and avoid impacting performance of binary log group commit.

    • Scope: Global

    • Dynamic: Yes

    • Data Type: numeric
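
    For example, relaxing the flush interval to two seconds (an illustrative value) to reduce flushing pressure:

    SET GLOBAL innodb_flush_log_at_timeout = 2;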

    innodb_flush_log_at_trx_commit

    • Description: Set to 1, along with for the greatest level of fault tolerance. The value of determines whether this variable can be reset with a SET statement or not.

      • 1 The default, the log buffer is written to the file and a flush to disk performed after each transaction. This is required for full ACID compliance.

      • 0 Nothing is done on commit; rather the log buffer is written and flushed to the

    innodb_flush_method

    • Description: flushing method. Windows always uses async_unbuffered and this variable then has no effect. On Unix, before , by default fsync() is used to flush data and logs. Adjusting this variable can give performance improvements, but behavior differs widely on different filesystems, and changing from the default has caused problems in some situations, so test and benchmark carefully before adjusting. In MariaDB, Windows recognises and correctly handles the Unix methods, but if none are specified it uses its own default: unbuffered writes (analogous to O_DIRECT) plus syncs (e.g. FlushFileBuffers()) for all files.

      • O_DSYNC - O_DSYNC is used to open and flush logs, and fsync() to flush the data files.

      • O_DIRECT

    innodb_flush_neighbor_pages

    • Description: Determines whether, when dirty pages are flushed to the data file, neighboring pages in the data file are flushed at the same time. If set to none, the feature is disabled. If set to area, the default, the standard InnoDB behavior is used. For each page to be flushed, dirty neighboring pages are flushed too. If there's little head seek delay, such as SSD or large enough write buffer, one of the other two options may be more efficient. If set to cont, for each page to be flushed, neighboring contiguous blocks are flushed at the same time. Being contiguous, a sequential I/O is used, unlike the random I/O used in area. Replaced by in /XtraDB 5.6.

    • Command line: innodb-flush-neighbor-pages=value

    innodb_flush_neighbors

    • Description: Determines whether flushing a page from the will flush other dirty pages in the same group of pages (extent). In high-write environments, if flushing is not aggressive enough, it can fall behind, resulting in higher memory usage, or if flushing is too aggressive, cause excess I/O activity. SSD devices, with low seek times, would be less likely to require dirty neighbor flushing to be set. Since , an attempt has been made under Windows and Linux to determine SSD status, which was exposed in . This variable is ignored for tablespaces that are detected as stored on SSD (and the 0 behavior applies).

      • 1: The default, flushes contiguous dirty pages in the same extent from the buffer pool.

    innodb_flush_sync

    • Description: If set to ON, the default, the setting is ignored for I/O bursts occurring at checkpoints.

    • Command line: --innodb-flush-sync={0|1}

    • Scope: Global

    • Dynamic: Yes

    innodb_flushing_avg_loops

    • Description: Determines how quickly adaptive flushing will respond to changing workloads. The value is the number of iterations that a previously calculated flushing state snapshot is kept. Increasing the value smooths and slows the rate that the flushing operations change, while decreasing it causes flushing activity to spike quickly in response to workload changes.

    • Command line: --innodb-flushing-avg-loops=#

    • Scope: Global

    innodb_force_load_corrupted

    • Description: Set to 0 by default; if set to 1, it is permitted to load tables marked as corrupt. Only use this to recover data you can't recover any other way, or in troubleshooting. Always restore it to 0 when returning to regular use. Given that in aims to allow any metadata for a missing or corrupted table to be dropped, and given that and related tasks made DDL operations crash-safe, the parameter no longer serves any purpose and was removed in .

    • Command line: --innodb-force-load-corrupted

    innodb_force_primary_key

    • Description: If set to 1 (0 is default) CREATE TABLEs without a primary or unique key where all keyparts are NOT NULL will not be accepted, and will return an error.

    • Command line: --innodb-force-primary-key

    • Scope: Global
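
    A small sketch of the effect (the table name is a placeholder): once enabled, creating an InnoDB table without a suitable key is rejected.

    SET GLOBAL innodb_force_primary_key = ON;
    CREATE TABLE t_without_pk (a INT) ENGINE=InnoDB;  -- rejected: no primary key (or suitable unique key) is defined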

    innodb_force_recovery

    • Description: crash recovery mode. 0 is the default. The other modes are for recovery purposes only, and no data can be changed while another mode is active. Some queries relying on indexes are also blocked. See for more on mode specifics.

    • Command line: --innodb-force-recovery=#

    • Scope: Global

    innodb_foreground_preflush

    • Description: Before XtraDB 5.6.13-61.0, if the checkpoint age is in the sync preflush zone while a thread is writing to the , it will try to advance the checkpoint by issuing a flush list flush batch if this is not already being done. XtraDB has enhanced page cleaner tuning, and may already be performing furious flushing, resulting in the flush simply adding unneeded mutex pressure. Instead, the thread now waits for the flushes to finish, and then has two options, controlled by this variable. XtraDB only. Added as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

      • exponential_backoff - thread sleeps while it waits for the flush list flush to occur. The sleep time randomly progressively increases, periodically reset to avoid runaway sleeps.

    innodb_ft_aux_table

    • Description: Diagnostic variable intended only to be set at runtime. It specifies the qualified name (for example test/ft_innodb) of an InnoDB table that has a , and after being set the INFORMATION_SCHEMA tables , , INNODB_FT_CONFIG, , and will contain search index information for the specified table.

    • Command line: --innodb-ft-aux-table=value

    • Scope: Global
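
    For example (the qualified name test/ft_demo is hypothetical and must refer to an existing InnoDB table with a fulltext index):

    SET GLOBAL innodb_ft_aux_table = 'test/ft_demo';
    SELECT * FROM information_schema.INNODB_FT_CONFIG;
    SELECT word, doc_count FROM information_schema.INNODB_FT_INDEX_TABLE LIMIT 10;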

    innodb_ft_cache_size

    • Description: Cache size available for a parsed document while creating an InnoDB .

    • Command line: --innodb-ft-cache-size=#

    • Scope: Global

    • Dynamic: No

    innodb_ft_enable_diag_print

    • Description: If set to 1, additional search diagnostic output is enabled.

    • Command line: --innodb-ft-enable-diag-print={0|1}

    • Scope: Global

    • Dynamic: Yes

    innodb_ft_enable_stopword

    • Description: If set to 1, the default, a set of is associated with an InnoDB when it is created. The stopword list comes from the table set by the session variable , if set, otherwise the global variable , if that is set, or the if neither variable is set.

    • Command line: --innodb-ft-enable-stopword={0|1}

    • Scope: Global

    innodb_ft_max_token_size

    • Description: Maximum length of words stored in an InnoDB . A larger limit will increase the size of the index, slowing down queries, but permit longer words to be searched for. In most normal situations, longer words are unlikely search terms.

    • Command line: --innodb-ft-max-token-size=#

    • Scope: Global

    • Dynamic: No

    innodb_ft_min_token_size

    • Description: Minimum length of words stored in an InnoDB . A smaller limit will increase the size of the index, slowing down queries, but permit shorter words to be searched for. For data stored in a Chinese, Japanese or Korean , a value of 1 should be specified to preserve functionality.

    • Command line: --innodb-ft-min-token-size=#

    • Scope: Global

    innodb_ft_num_word_optimize

    • Description: Number of words processed during each on an InnoDB . To ensure all changes are incorporated, multiple OPTIMIZE TABLE statements could be run in case of a substantial change to the index.

    • Command line: --innodb-ft-num-word-optimize=#

    • Scope: Global

    • Dynamic: Yes

    innodb_ft_result_cache_limit

    • Description: Limit in bytes of the InnoDB query result cache per fulltext query. The latter stages of the full-text search are handled in memory, and limiting this prevents excess memory usage. If the limit is exceeded, the query returns an error.

    • Command line: --innodb-ft-result-cache-limit=#

    • Scope: Global

    • Dynamic: Yes

    innodb_ft_server_stopword_table

    • Description: Table name containing a list of stopwords to ignore when creating an InnoDB , in the format db_name/table_name. The specified table must exist before this option is set, and must be an InnoDB table with a single column, a named VALUE. See also .

    • Command line: --innodb-ft-server-stopword-table=db_name/table_name

    • Scope: Global
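
    A minimal sketch, with placeholder database and table names; the stopword table must be an InnoDB table whose single column is named value:

    CREATE TABLE mydb.ft_stopwords (value VARCHAR(30)) ENGINE=InnoDB;
    INSERT INTO mydb.ft_stopwords VALUES ('the'), ('and'), ('or');
    SET GLOBAL innodb_ft_server_stopword_table = 'mydb/ft_stopwords';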

    innodb_ft_sort_pll_degree

    • Description: Number of parallel threads used when building an InnoDB . See also .

    • Command line: --innodb-ft-sort-pll-degree=#

    • Scope: Global

    • Dynamic: No

    innodb_ft_total_cache_size

    • Description: Total memory allocated for the cache for all InnoDB tables. A force sync is triggered if this limit is exceeded.

    • Command line: --innodb-ft-total-cache-size=#

    • Scope: Global

    • Dynamic: No

    innodb_ft_user_stopword_table

    • Description: Table name containing a list of stopwords to ignore when creating an InnoDB , in the format db_name/table_name. The specified table must exist before this option is set, and must be an InnoDB table with a single column, a named VALUE. See also .

    • Command line: --innodb-ft-user-stopword-table=db_name/table_name

    • Scope: Session

    innodb_ibuf_accel_rate

    • Description: Allows the insert buffer activity to be adjusted. The following formula is used: [real activity] = [default activity] * (innodb_io_capacity/100) * (innodb_ibuf_accel_rate/100). As innodb_ibuf_accel_rate is increased from its default value of 100, the lowest setting, insert buffer activity is increased. See also . This Percona XtraDB variable has not been ported to XtraDB 5.6.

    • Command line: innodb-ibuf-accel-rate=#

    • Scope: Global

    innodb_ibuf_active_contract

    • Description: Specifies whether the insert buffer can be processed before it's full. If set to 0, the standard InnoDB method is used, and the buffer is not processed until it's full. If set to 1, the default, the insert buffer can be processed before it is full. This Percona XtraDB variable has not been ported to XtraDB 5.6.

    • Command line: innodb-ibuf-active-contract=#

    • Scope: Global

    innodb_ibuf_max_size

    • Description: Maximum size in bytes of the insert buffer. Defaults to half the size of the so you may want to reduce if you have a very large buffer pool. If set to 0, the insert buffer is disabled, which will cause all secondary index updates to be performed synchronously, usually at a cost to performance. This Percona XtraDB variable has not been ported to XtraDB 5.6.

    • Command line: innodb-ibuf-max-size=#

    • Scope: Global

    innodb_idle_flush_pct

    • Description: Up to what percentage of dirty pages should be flushed when InnoDB finds it has spare resources to do so. Has had no effect since merging InnoDB 5.7 from mysql-5.7.9. Deprecated in , , and removed in .

    • Command line: --innodb-idle-flush-pct=#

    • Scope: Global

    innodb_immediate_scrub_data_uncompressed

    • Description: Enable scrubbing of data. See .

    • Command line: --innodb-immediate-scrub-data-uncompressed={0|1}

    • Scope: Global

    • Dynamic: Yes

    innodb_import_table_from_xtrabackup

    • Description: If set to 1, permits importing of .ibd files exported with the --export option. Previously named innodb_expand_import. Removed in /XtraDB 5.6 and replaced with MySQL 5.6's transportable tablespaces.

    • Command line: innodb-import-table-from-xtrabackup=#

    • Scope: Global

    innodb_instant_alter_column_allowed

    • Description:

      • If a table is altered using ALGORITHM=INSTANT, it can force the table to use a non-canonical format: A hidden metadata record at the start of the clustered index is used to store each column's DEFAULT value. This makes it possible to add new columns that have default values without rebuilding the table. Starting with , a BLOB in the hidden metadata record is used to store column mappings. This makes it possible to drop or reorder columns without rebuilding the table. This also makes it possible to add columns to any position or drop columns from any position in the table without rebuilding the table. If a column is dropped without rebuilding the table, old records will contain garbage in that column's former position, and new records are written with NULL values, empty strings, or dummy values.

      • This is generally not a problem. However, there may be cases where you want to avoid putting a table into this format. For example, to ensure that future UPDATE operations after an ADD COLUMN are performed in-place, to reduce write amplification. (Instantly added columns are essentially always variable-length.) Also avoid bugs similar to

    innodb_instrument_semaphores

    • Description: Enable semaphore request instrumentation. This could have some effect on performance but allows better information on long semaphore wait problems.

    • Command line: --innodb-instrument-semaphores={0|1}

    • Scope: Global

    • Dynamic: Yes

    innodb_io_capacity

    • Description: Limit on I/O activity for InnoDB background tasks, including merging data from the insert buffer and flushing pages. Should be set to around the number of I/O operations per second that the system can handle, based on the type of drive/s being used. You can also set it higher when the server starts to help with the extra workload at that time, and then reduce it for normal use. Ideally, opt for a lower setting, as at higher values data is removed from the buffers too quickly, reducing the effectiveness of caching. See also .

      • See for more information.

    • Command line: --innodb-io-capacity=#
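
    For example, on storage that can sustain roughly 2000 IOPS (the numbers below are illustrative), the base and emergency limits might be set as:

    SET GLOBAL innodb_io_capacity = 2000;
    SET GLOBAL innodb_io_capacity_max = 4000;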

    innodb_io_capacity_max

    • Description: Upper limit to which InnoDB can extend in case of emergency. See for more information.

    • Command line: --innodb-io-capacity-max=#

    • Scope: Global

    • Dynamic: Yes

    innodb_kill_idle_transaction

    • Description: Time in seconds before killing an idle XtraDB transaction. If set to 0 (the default), the feature is disabled. Used to prevent accidental user locks. XtraDB only. Added as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

    • Scope: Global

    • Dynamic: Yes

    • Data Type:

    innodb_large_prefix

    • Description: If set to 1, tables that use specific are permitted to have index key prefixes up to 3072 bytes (for 16k pages, ). If not set, the limit is 767 bytes.

      • This applies to the and row formats.

      • Removed in 10.3.1 and restored as a deprecated and unused variable in 10.4.3 for compatibility purposes.

    innodb_lazy_drop_table

    • Description: Deprecated and removed in XtraDB 5.6. processing can take a long time when is set to 1 and there's a large . If innodb_lazy_drop_table is set to 1 (0 is default), XtraDB attempts to optimize processing by deferring the dropping of related pages from the until there is time, only initially marking them.

    • Command line: innodb-lazy-drop-table={0|1}

    innodb_lock_schedule_algorithm

    • Description: Removed in due to problems with the VATS implementation (). Specifies the algorithm that InnoDB uses to decide which of the waiting transactions should be granted the lock once it has been released. The possible values are: FCFS (First-Come-First-Served) where locks are granted in the order they appear in the lock queue and VATS (Variance-Aware-Transaction-Scheduling) where locks are granted based on the Eldest-Transaction-First heuristic. Note that VATS should not be used with , and InnoDB will refuse to start if VATS is used with Galera. It is also not recommended to set to VATS even in the general case (). From , the value was changed to FCFS and a warning produced when using Galera.

    innodb_lock_wait_timeout

    • Description: Time in seconds that an InnoDB transaction waits for an InnoDB record lock (or table lock) before giving up with the error ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction. When this occurs, the statement (not transaction) is rolled back. The whole transaction can be rolled back if the option is used. Increase this for data warehousing applications or where other long-running operations are common, or decrease for OLTP and other highly interactive applications. This setting does not apply to deadlocks, which InnoDB detects immediately, rolling back a deadlocked transaction. 0 means no wait. See . Setting to 100000000 or more (from , 100000000 is the maximum) means the timeout is infinite.

    • Command line: --innodb-lock-wait-timeout=#
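
    For example, an interactive OLTP session might lower the timeout for itself while the server-wide default stays higher (values are illustrative):

    SET SESSION innodb_lock_wait_timeout = 5;
    SET GLOBAL innodb_lock_wait_timeout = 120;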

    innodb_locking_fake_changes

    • Description: From to , XtraDB-only option that if set to OFF, fake transactions (see ) don't take row locks. This is an experimental feature to attempt to deal with drawbacks in fake changes blocking real locks. It is not safe for use in all environments. Added as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

    • Command line: --innodb-locking-fake-changes

    • Scope: Global

    innodb_locks_unsafe_for_binlog

    • Description: Set to 0 by default, in which case XtraDB/InnoDB uses . If set to 1, gap locking is disabled for searches and index scans. Deprecated in , and removed in , use instead.

    • Command line: --innodb-locks-unsafe-for-binlog

    • Scope: Global

    innodb_log_arch_dir

    • Description: The directory for archiving. XtraDB only. Added as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

    • Command line: --innodb-log-arch-dir=name

    • Scope: Global

    • Dynamic: No

    innodb_log_arch_expire_sec

    • Description: Time in seconds since the last change after which the archived should be deleted. XtraDB only. Added as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

    • Command line: --innodb-log-arch-expire-sec=#

    • Scope: Global

    • Dynamic: Yes

    innodb_log_archive

    • Description: Whether or not archiving is enabled. XtraDB only. Added as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

    • Command line: --innodb-log-archive={0|1}

    • Scope: Global

    • Dynamic: Yes

    innodb_log_block_size

    • Description: Size in bytes of the records. Generally 512, the default, or 4096, are the only two useful values. If the server is restarted and this value is changed, all old log files need to be removed. Should be set to 4096 for SSD cards or if is set to ALL_O_DIRECT on ext4 filesystems. XtraDB only. Added as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

    • Command line: innodb-log-block-size=#

    innodb_log_buffer_size

    • Description: Size in bytes of the buffer for writing files to disk. Increasing this means larger transactions can run without needing to perform disk I/O before committing.

    • Command line: --innodb-log-buffer-size=#

    • Scope: Global

    • Dynamic: No
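
    As a rough check (a sketch, not a sizing rule), a steadily growing Innodb_log_waits status counter suggests the buffer is too small and innodb_log_buffer_size may need to be raised in the server configuration:

      SHOW GLOBAL STATUS LIKE 'Innodb_log_waits';
      SHOW GLOBAL VARIABLES LIKE 'innodb_log_buffer_size';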

    innodb_log_checkpoint_now

    • Description: Write back dirty pages from the and update the log checkpoint. Prior to , this was only available in debug builds. Introduced in order to force checkpoints before a backup, allowing mariadb-backup to create much smaller incremental backups. However, this comes at the cost of heavy I/O usage and it is now disabled by default.

    • Command line: --innodb-log-checkpoint{=1|0}

    • Scope: Global

    innodb_log_checksum_algorithm

    • Description: Experimental feature (as of ), this variable specifies how to generate and verify checksums. XtraDB only. Added as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

      • none - No checksum. A constant value is instead written to logs, and no checksum validation is performed.

      • innodb - The default, and the original InnoDB algorithm. This is inefficient, but compatible with all MySQL, MariaDB and Percona versions that don't support other checksum algorithms.

    innodb_log_checksums

    • Description: If set to 1, the CRC32C algorithm (for InnoDB) or the algorithm set by innodb_log_checksum_algorithm (for XtraDB) is used for pages. If disabled, the checksum field contents are ignored. From , the variable is deprecated and checksums are always calculated: previously the InnoDB redo log used the slow innodb algorithm, but with hardware or SIMD-assisted CRC-32C computation available, there is no reason to allow checksums to be disabled on the redo log.

    • Command line: innodb-log-checksums={0|1}

    • Scope: Global

    innodb_log_compressed_pages

    • Description: Whether or not images of recompressed pages are stored in the . Deprecated and ignored from .

    • Command line: --innodb-log-compressed-pages={0|1}

    • Scope: Global

    • Dynamic: Yes

    innodb_log_file_buffering

    • Description: Whether the file system cache for ib_logfile0 is enabled. In , MariaDB disabled the file system cache on the InnoDB write-ahead log file (ib_logfile0) by default on Linux. With in particular, writing to the log via the file system cache typically improves throughput, especially on slow storage or at a small number of concurrent transactions. For other values of innodb_flush_log_at_trx_commit, direct writes were observed to be mostly but not always faster. Whether it pays off to disable the file system cache on the log may depend on the type of storage, the workload, and the operating system kernel version. If the server is started up with , the value is changed to ON. It will be set to OFF if is set to O_DSYNC. On Linux, when the physical block size cannot be determined to be a power of 2 between 64 and 4096 bytes, the file system cache cannot be disabled, and innodb_log_file_buffering=ON cannot be changed. Linux and Windows only.

    • Command line: --innodb-log-file-buffering={0|1}

    innodb_log_file_mmap

    • Description: Whether ib_logfile0 resides in persistent memory or should initially be memory-mapped. When using the default innodb_log_buffer_size=2m, mariadb-backup --backup would spend a lot of time re-reading and re-parsing the log. For reading the log file during mariadb-backup --backup, it is beneficial to memory-map the entire ib_logfile0 to the address space (typically 48 bits or 256 TiB) and read it from there, both during --backup and --prepare. OFF by default on most platforms, to avoid aggressive read-ahead of the entire ib_logfile0 in when only a tiny portion would be accessed. On Linux and FreeBSD the default is innodb_log_file_mmap=ON, because those platforms define a specific mmap(2) option for enabling such read-ahead and therefore it can be assumed that the default would be on-demand paging. This parameter will only have an impact on the initial InnoDB startup and recovery. Any writes to the log will use regular I/O, except when the ib_logfile0 is stored in a specially configured file system that is backed by persistent memory (Linux "mount -o dax").

    • Command line: --innodb-log-file-mmap{=0|1}

    innodb_log_file_size

    • Description: Size in bytes of each file in the log group. The combined size can be no more than 512GB. Larger values mean less disk I/O due to less checkpoint flushing activity, but also slower recovery from a crash. In , crash recovery has been improved and shouldn't run out of memory, so the default has been increased. It can safely be set higher to reduce checkpoint flushing, even larger than . From , the variable is dynamic, and the server no longer needs to be restarted for the resizing to take place. Unless the log is located in a persistent memory file system (PMEM), an attempt to set innodb_log_file_size to less than is refused. Log resizing can be aborted by killing the connection that is executing the SET GLOBAL statement.

    • Command line: --innodb-log-file-size=#

    • Scope: Global
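
    On versions where the variable is dynamic, the redo log can be resized online; on older versions the value must be changed in the configuration file and the server restarted. The size below is purely illustrative:

      SET GLOBAL innodb_log_file_size = 4294967296;  -- 4 GiB
      SHOW GLOBAL VARIABLES LIKE 'innodb_log_file_size';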

    innodb_log_file_write_through

    • Description: Whether each write to ib_logfile0 is write-through (disabling any caching, as in O_SYNC or O_DSYNC). Set to OFF by default; it is set to ON if is set to O_DSYNC. On systems that support FUA it may make sense to enable write-through, to avoid extra system calls.

    • Command line: --innodb-log-file-write-through={0|1}

    • Scope: Global

    innodb_log_files_in_group

    • Description: Number of physical files in the . Deprecated and ignored from

    • Command line: --innodb-log-files-in-group=#

    • Scope: Global

    • Dynamic: No

    innodb_log_group_home_dir

    • Description: Path to the files. If none is specified, files named ib_logfile0 and so on, with a size of are created in the data directory.

    • Command line: --innodb-log-group-home-dir=path

    • Scope: Global

    • Dynamic: No

    innodb_log_optimize_ddl

    • Description: Whether activity should be reduced when natively creating indexes or rebuilding tables. Reduced logging requires additional page flushing and interferes with . Enabling this may slow down backup and cause delay due to page flushing. Deprecated and ignored from . Deprecated (but not ignored) from , and .

    • Command line: --innodb-log-optimize-ddl={0|1}

    • Scope: Global

    innodb_log_spin_wait_delay

    • Description: Delay between log buffer spin lock polls (0 to use a blocking latch). Specifically, enables a spin lock that will execute that many MY_RELAX_CPU() operations (such as the x86 PAUSE instruction) between successive attempts of acquiring the spin lock. On some hardware with certain workloads (observed on write intensive workloads on NUMA systems), the default setting results in a significant amount of time being spent in native_queued_spin_lock_slowpath() in the Linux kernel, plus context switching between user and kernel address space, in which case changing from the default (for example, setting to 50), may result in a performance improvement.

    • Command line: --innodb-log-spin-wait-delay=#

    • Scope: Global

    innodb_log_write_ahead_size

    • Description: Write-ahead unit size to avoid read-on-write. Should match the OS cache block I/O size. Removed in , and instead on Linux and Windows, the physical block size of the underlying storage is detected and used. Reintroduced in and later versions. On Linux and Windows, the default or the specified innodb_log_write_ahead_size is automatically adjusted to not be less than the physical block size (if it can be determined).

    • Command line: --innodb-log-write-ahead-size=#

    • Scope: Global

    innodb_lru_flush_size

    • Description: Number of pages to flush on LRU eviction. Changes in , , , , and made this setting superfluous, and it is no longer used.

    • Command line: --innodb-lru-flush-size=#

    • Scope: Global

    innodb_lru_scan_depth

    • Description: Specifies how far down the buffer pool least-recently used (LRU) list the cleaning thread should look for dirty pages to flush. This process is performed once a second. In an I/O-intensive workload, it can be increased if there is spare I/O capacity, or decreased in a write-intensive workload with little spare I/O capacity.

      • See for more information.

    • Command line: --innodb-lru-scan-depth=#

    innodb_max_bitmap_file_size

    • Description: Limit in bytes of the changed page bitmap files. For faster incremental backup with , XtraDB tracks pages with changes written to them according to the and writes the information to special changed page bitmap files. These files are rotated when the server restarts or when this limit is reached. XtraDB only. See also and .

      • Deprecated and ignored in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

    • Command line: innodb-max-bitmap-file-size=#

    innodb_max_changed_pages

    • Description: Limit to the number of changed page bitmap files (stored in the ). Zero is unlimited. See and . Previously named innodb_changed_pages_limit. XtraDB only.

      • Deprecated and ignored in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

    • Command line: innodb-max-changed-pages=#

    innodb_max_dirty_pages_pct

    • Description: Maximum percentage of unwritten (dirty) pages in the buffer pool.

      • See for more information.

    • Command line: --innodb-max-dirty-pages-pct=#

    • Scope: Global

    innodb_max_dirty_pages_pct_lwm

    • Description: Low water mark percentage of dirty pages that will enable preflushing to lower the dirty page ratio. The value 0 (default) means 'refer to '. (Note that 0 meant 0 in 10.5.7 to 10.5.8, but was then reverted back to "same as innodb_max_dirty_pages_pct" again in 10.5.9)

      • See for more information.

    • Command line: --innodb-max-dirty-pages-pct-lwm=#
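
    A minimal sketch combining the two settings with a check of the current dirty-page ratio; the percentages are illustrative assumptions, not recommendations:

      SET GLOBAL innodb_max_dirty_pages_pct = 80;
      SET GLOBAL innodb_max_dirty_pages_pct_lwm = 60;
      SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
      SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total';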

    innodb_max_purge_lag

    • Description: When purge operations are lagging on a busy server, setting innodb_max_purge_lag can help. By default it is set to 0 (no lag); otherwise the value is used to calculate a delay for each INSERT, UPDATE, and DELETE when the system is lagging. InnoDB keeps a list of transactions with delete-marked index records due to UPDATE and DELETE statements. The length of this list is purge_lag, and the delay, recalculated every ten seconds, is: ((purge_lag/innodb_max_purge_lag)×10)–5 microseconds.

    • Command line: --innodb-max-purge-lag=#

    • Scope: Global

    innodb_max_purge_lag_delay

    • Description: Maximum delay in milliseconds imposed by the setting. If set to 0, the default, there is no maximum.

    • Command line: --innodb-max-purge-lag-delay=#

    • Scope: Global

    • Dynamic: Yes

    innodb_max_purge_lag_wait

    • Description: Wait until the history list length is below the specified limit.

    • Command line: --innodb-max-purge-lag-wait=#

    • Scope: Global

    • Dynamic: Yes
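
    A sketch of how the purge-lag settings are typically combined, assuming the Innodb_history_list_length status variable is available on your version; the thresholds are illustrative only:

      SHOW GLOBAL STATUS LIKE 'Innodb_history_list_length';  -- current purge backlog
      SET GLOBAL innodb_max_purge_lag = 1000000;              -- throttle DML above this backlog
      SET GLOBAL innodb_max_purge_lag_delay = 300000;         -- cap the imposed delay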

    innodb_max_undo_log_size

    • Description: If an undo tablespace is larger than this, it is marked for truncation if is set.

    • Command line: --innodb-max-undo-log-size=#

    • Scope: Global

    • Dynamic: Yes

    innodb_merge_sort_block_size

    • Description: Size in bytes of the block used for merge sorting in fast index creation. Replaced in /XtraDB 5.6 by .

    • Command line: innodb-merge-sort-block-size=#

    • Scope: Global

    • Dynamic: Yes

    innodb_mirrored_log_groups

    • Description: Unused. Restored as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

    • Deprecated:

    • Removed: -

    innodb_mtflush_threads

    • Description: Sets the number of threads to use in Multi-Threaded Flush operations. For more information, see .

      • InnoDB's multi-thread flush feature was deprecated in and removed from . In later versions of MariaDB, use system variable instead.

      • See for more information.

    innodb_monitor_disable

    • Description: Disables the specified counters in the table.

    • Command line: --innodb-monitor-disable=string

    • Scope: Global

    • Dynamic: Yes

    innodb_monitor_enable

    • Description: Enables the specified counters in the table.

    • Command line: --innodb-monitor-enable=string

    • Scope: Global

    • Dynamic: Yes

    innodb_monitor_reset

    • Description: Resets the count value of the specified counters in the table to zero.

    • Command line: --innodb-monitor-reset=string

    • Scope: Global

    • Dynamic: Yes

    innodb_monitor_reset_all

    • Description: Resets all values for the specified counters in the table.

    • Command line: ---innodb-monitor-reset-all=string

    • Scope: Global

    • Dynamic: Yes
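
    The four innodb_monitor_* variables accept counter names or wildcard patterns, and the counters are read from the information_schema.INNODB_METRICS table. A minimal sketch using an illustrative 'buffer%' pattern:

      SET GLOBAL innodb_monitor_enable = 'buffer%';
      SELECT NAME, SUBSYSTEM, `COUNT`, STATUS
        FROM information_schema.INNODB_METRICS
       WHERE NAME LIKE 'buffer%';
      SET GLOBAL innodb_monitor_reset_all = 'buffer%';
      SET GLOBAL innodb_monitor_disable = 'buffer%';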

    innodb_numa_interleave

    • Description: Whether or not to use the NUMA interleave memory policy to allocate the . Before , it required that MariaDB be compiled on a NUMA-enabled Linux system.

    • Command line: innodb-numa-interleave={0|1}

    • Scope: Global

    • Dynamic: No

    innodb_old_blocks_pct

    • Description: Percentage of the to use for the old block sublist.

    • Command line: --innodb-old-blocks-pct=#

    • Scope: Global

    • Dynamic: Yes

    innodb_old_blocks_time

    • Description: Time in milliseconds an inserted block must stay in the old sublist after its first access before it can be moved to the new sublist. '0' means "no delay". Setting a non-zero value can help prevent full table scans clogging the . See also .

    • Command line: --innodb-old-blocks-time=#

    • Scope: Global

    • Dynamic: Yes
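
    A hedged example of making the buffer pool more resistant to full table scans; the values are illustrative, not tuned recommendations:

      SET GLOBAL innodb_old_blocks_pct = 30;     -- shrink the old sublist
      SET GLOBAL innodb_old_blocks_time = 1000;  -- 1 second before promotion to the new sublist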

    innodb_online_alter_log_max_size

    • Description: The maximum size for temporary log files during online DDL (data and index structure changes). The temporary log file is used for each table being altered, or index being created, to store data changes to the table while the process is underway. The temporary log file is extended up to the limit set by this variable. If this limit is exceeded, the online DDL operation fails and all uncommitted changes are rolled back. A lower value reduces the time a table could lock at the end of the operation to apply all the log's changes, but also increases the chance of the online DDL changes failing.

    • Command line: --innodb-online-alter-log-max-size=#

    • Scope: Global

    innodb_open_files

    • Description: Maximum number of .ibd files MariaDB can have open at the same time. Only applies to systems with multiple XtraDB/InnoDB tablespaces, and is separate from the table cache and . The default, if is disabled, is 300 or the value of , whichever is higher. It will also auto-size up to the default value if it is set to a value less than 10.

    • Command line: --innodb-open-files=#

    • Scope: Global

    innodb_optimize_fulltext_only

    • Description: When set to 1 (0 is default), will only process InnoDB data. Only intended for use during fulltext index maintenance.

    • Command line: --innodb-optimize-fulltext-only={0|1}

    • Scope: Global
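
    The variable is intended to be toggled around fulltext index maintenance. A sketch, where articles is a hypothetical table with a FULLTEXT index:

      SET GLOBAL innodb_optimize_fulltext_only = 1;
      OPTIMIZE TABLE articles;  -- processes only the fulltext index data
      SET GLOBAL innodb_optimize_fulltext_only = 0;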

    innodb_page_cleaners

    • Description: Number of page cleaner threads. The default is 4, but the value is set to the number of if this is lower. If set to 1, only a single cleaner thread is used, as was the case until . Cleaner threads flush dirty pages from the , performing flush list and least-recently used (LRU) flushing. Deprecated and ignored from , as the original reasons for splitting the buffer pool have mostly gone away.

      • See for more information.

    • Command line:

    innodb_page_size

    • Description: Specifies the page size in bytes for all InnoDB tablespaces. The default, 16k, is suitable for most uses.

      • A smaller InnoDB page size might work more effectively in a situation with many small writes (OLTP), or with SSD storage, which usually has smaller block sizes.

      • A larger InnoDB page size can provide a larger .

    innodb_pass_corrupt_table

    • Removed: XtraDB 5.5 - renamed .

    innodb_prefix_index_cluster_optimization

    • Description: Enable prefix optimization to sometimes avoid cluster index lookups. Deprecated and ignored from , as the optimization is now always enabled.

    • Command line: --innodb-prefix-index-cluster-optimization={0|1}

    • Scope: Global

    • Dynamic: Yes

    innodb_print_all_deadlocks

    • Description: If set to 1 (0 is default), all InnoDB transaction deadlock information is written to the .

    • Command line: --innodb-print-all-deadlocks={0|1}

    • Scope: Global

    innodb_purge_batch_size

    • Description: Number of pages to purge in one batch from the history list. Together with has a small effect on tuning.

    • Command line: --innodb-purge-batch-size=#

    • Scope: Global

    • Dynamic: No

    innodb_purge_rseg_truncate_frequency

    • Description: Frequency with which undo records are purged, by default every 128 purge invocations; reducing this increases the frequency at which rollback segments are freed. See also . The motivation for introducing this in MySQL seems to have been to avoid stalls due to freeing undo log pages or truncating undo log tablespaces. In MariaDB, should be a much lighter operation because it will not involve any log checkpoint, hence this is deprecated and ignored from , , , , and . ()

    • Command line: --innodb-purge-rseg-truncate-frequency=#

    • Scope: Global

    innodb_purge_threads

    • Description: Number of background threads dedicated to InnoDB purge operations. The range is 1 to 32. At least one background thread is always used. Setting to a value greater than 1 creates that many separate purge threads. This can improve efficiency in some cases, such as when performing DML operations on many tables. See also .

    • Command line: --innodb-purge-threads=#

    • Scope: Global

    innodb_random_read_ahead

    • Description: Originally, random read-ahead was always set as an optimization technique, but it was removed in . innodb_random_read_ahead permits it to be re-instated if set to 1 (0 is default).

    • Command line: --innodb-random-read-ahead={0|1}

    • Scope: Global

    innodb_read_ahead

    • Description: If set to linear, the default, XtraDB/InnoDB will automatically fetch remaining pages if there are enough within the same extent that can be accessed sequentially. If set to none, read-ahead is disabled. random has been removed and is now ignored, while both sets both linear and random. Also see for more control on read-aheads. Removed in /XtraDB 5.6 and replaced by MySQL 5.6's .

    • Command line: innodb-read-ahead=value

    innodb_read_ahead_threshold

    • Description: Minimum number of pages InnoDB must read sequentially from an extent of 64 before initiating an asynchronous read for the following extent.

    • Command line: --innodb-read-ahead-threshold=#

    • Scope: Global

    • Dynamic: Yes
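
    A minimal sketch: lower the threshold slightly and gauge the effect through the read-ahead status counters; the value is illustrative:

      SET GLOBAL innodb_read_ahead_threshold = 48;
      SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_ahead%';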

    innodb_read_io_threads

    • Description: Prior to , this was simply the number of I/O threads for InnoDB reads. From , asynchronous I/O functionality in the InnoDB Background Thread Pool replaces the old InnoDB I/O Threads. This variable is now multiplied by 256 to determine the maximum number of concurrent asynchronous I/O read requests that can be completed by the Background Thread Pool. The default is therefore 4*256 = 1024 concurrent asynchronous read requests. You may on rare occasions need to reduce this default on Linux systems running multiple MariaDB servers to avoid exceeding system limits, or increase it if spending too much time waiting on I/O requests.

    • Command line: --innodb-read-io-threads=#

    • Scope: Global

    innodb_read_only

    • Description: If set to 1 (0 is default), the server is read-only. For use in distributed applications, data warehouses or read-only media.

    • Command line: --innodb-read-only={0|1}

    • Scope: Global

    innodb_read_only_compressed

    • Description: If set (the default before ), tables are read-only. This was intended to be the first step towards removing write support and deprecating the feature, but this plan has been abandoned.

    • Command line: --innodb-read-only-compressed, --skip-innodb-read-only-compressed

    • Scope: Global

    innodb_recovery_stats

    • Description: If set to 1 (0 is default) and recovery is necessary on startup, the server will write detailed recovery statistics to the error log at the end of the recovery process. This Percona XtraDB variable has not been ported to XtraDB 5.6.

    • Command line: No

    • Scope: Global

    innodb_recovery_update_relay_log

    • Description: If set to 1 (0 is default), the relay log info file is overwritten on crash recovery if the information differs from the InnoDB record. Should not be used if multiple storage engine types are being replicated. Previously named innodb_overwrite_relay_log_info. Removed in /XtraDB 5.6 and replaced by MySQL 5.6's relay-log-recovery.

    • Command line: innodb-recovery-update-relay-log={0|1}

    innodb_replication_delay

    • Description: Time in milliseconds for the replica server to delay the replication thread if is reached. Deprecated and ignored from .

    • Command line: --innodb-replication-delay=#

    • Scope: Global

    • Dynamic: Yes

    innodb_rollback_on_timeout

    • Description: InnoDB usually rolls back the last statement of a transaction that's been timed out (see ). If innodb_rollback_on_timeout is set to 1 (0 is default), InnoDB will roll back the entire transaction. Before , rolling back the entire transaction was the default behavior.

    • Command line: --innodb-rollback-on-timeout

    • Scope: Global

    • Dynamic: No

    innodb_rollback_segments

    • Description: Specifies the number of rollback segments that XtraDB/InnoDB will use within a transaction (see ). Deprecated and replaced by in . Removed in as part of an InnoDB cleanup, as it makes sense to always create and use the maximum number of rollback segments.

    • Command line: --innodb-rollback-segments=#

    • Scope: Global

    innodb_safe_truncate

    • Description: Use a backup-safe implementation and crash-safe rename operations inside InnoDB. This is not compatible with hot backup tools other than . Users who need to use such tools may set this to OFF.

    • Command line: --innodb-safe-truncate={0|1}

    • Scope: Global

    innodb_scrub_log

    • Description: Enable scrubbing. See . Deprecated and ignored from , as it never really worked ( and ). If old log contents should be kept secret, then enabling or setting a smaller could help.

    • Command line: --innodb-scrub-log

    • Scope: Global

    innodb_scrub_log_interval

    • Description: Used with in 10.1.3 only; replaced in 10.1.4 by . Scrubbing interval in milliseconds.

    • Command line: --innodb-scrub-log-interval=#

    • Scope: Global

    • Dynamic: Yes

    innodb_scrub_log_speed

    • Description: scrubbing speed in bytes/sec. See . Deprecated and ignored from .

    • Command line: --innodb-scrub-log-speed=#

    • Scope: Global

    • Dynamic: Yes

    innodb_sched_priority_cleaner

    • Description: Set a thread scheduling priority for cleaner and least-recently used (LRU) manager threads. The range from 0 to 39 corresponds in reverse order to Linux nice values of -20 to 19. So 0 is the lowest priority (Linux nice value 19) and 39 is the highest priority (Linux nice value -20). XtraDB only. Added as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

    innodb_show_locks_held

    • Description: Specifies the number of locks held for each InnoDB transaction to be displayed in output. XtraDB only. Added as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

    • Command line: innodb-show-locks-held=#

    • Scope: Global

    • Dynamic: Yes

    innodb_show_verbose_locks

    • Description: If set to 1, and is also ON, the traditional InnoDB behavior is followed and locked records are shown in output. If set to 0, the default, only high-level information about the lock is shown. XtraDB only. Added as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

    • Command line: innodb-show-verbose-locks=#

    • Scope: Global

    innodb_simulate_comp_failures

    • Description: Simulate compression failures. Used for testing robustness against random compression failures. XtraDB only.

    • Command line: None

    • Scope: Global

    • Dynamic: Yes

    innodb_snapshot_isolation

    • Description: Use snapshot isolation (write-write conflict detection). If set, an attempt to acquire a lock on a record that does not exist in the current read view raises the error DB_RECORD_CHANGED (HA_ERR_RECORD_CHANGED, ER_CHECKREAD). This error is treated in the same way as a deadlock and the transaction is rolled back. When set, the default isolation level becomes Snapshot Isolation. Prior to , the default is OFF for backwards compatibility.

    • Command line: --innodb-snapshot-isolation={0|1}

    • Scope: Global, Session

    innodb_sort_buffer_size

    • Description: Size of the sort buffers used for sorting data when an InnoDB index is created, as well as the amount by which the temporary log file is extended during online DDL operations to record concurrent writes. The larger the setting, the fewer merge phases are required between buffers while sorting. When a or creates a new index, three buffers of this size are allocated, as well as pointers for the rows in the buffer.

    • Command line: --innodb-sort-buffer-size=#

    • Scope: Global

    innodb_spin_wait_delay

    • Description: Maximum delay (not strictly corresponding to a time unit) between spin lock polls. Default changed from 6 to 4 in , as this was verified to give the best throughput by OLTP update index and read-write benchmarks on Intel Broadwell (2/20/40) and ARM (1/46/46).

    • Command line: --innodb-spin-wait-delay=#

    • Scope: Global

    innodb_stats_auto_recalc

    • Description: If set to 1 (the default), persistent statistics are automatically recalculated when the table changes significantly (more than 10% of the rows). Affects tables created or altered with STATS_PERSISTENT=1 (see ), or when is enabled. determines how much data to sample when recalculating. See .

    • Command line: --innodb-stats-auto-recalc={0|1}

    • Scope: Global

    innodb_stats_auto_update

    • Description: If set to 0 (1 is default), index statistics will not be automatically calculated except when an is run, or the table is first opened. Replaced by in /XtraDB 5.6.

    • Scope: Global

    • Dynamic: Yes

    innodb_stats_include_delete_marked

    • Description: Include delete marked records when calculating persistent statistics.

    • Scope: Global

    • Dynamic: Yes

    • Data Type: boolean

    innodb_stats_method

    • Description: Determines how NULLs are treated for InnoDB index statistics purposes.

      • nulls_equal: The default, all NULL index values are treated as a single group. This is usually fine, but if you have large numbers of NULLs the average group size is slanted higher, and the optimizer may miss using the index for ref accesses when it would be useful.

      • nulls_unequal: The opposite approach to nulls_equal is taken, with each NULL forming its own group of one. Conversely, the average group size is slanted lower, and the optimizer may use the index for ref accesses when not suitable.

    innodb_stats_modified_counter

    • Description: The number of rows modified before we calculate new statistics. If set to 0, the default, current limits are used.

    • Command line: --innodb-stats-modified-counter=#

    • Scope: Global

    • Dynamic: Yes

    innodb_stats_on_metadata

    • Description: If set to 1, the default, XtraDB/InnoDB updates statistics when accessing the INFORMATION_SCHEMA.TABLES or INFORMATION_SCHEMA.STATISTICS tables, and when running metadata statements such as or . If set to 0, statistics are not updated at those times, which can reduce the access time for large schemas, as well as make execution plans more stable.

    • Command line: --innodb-stats-on-metadata

    • Scope: Global

    innodb_stats_persistent

    • Description: produces index statistics, and this setting determines whether they are stored on disk, or required to be recalculated more frequently, such as when the server restarts. This information is stored for each table, and can be set with the STATS_PERSISTENT clause when creating or altering tables (see ). See .

    • Command line: --innodb-stats-persistent={0|1}

    • Scope: Global

    innodb_stats_persistent_sample_pages

    • Description: Number of index pages sampled when estimating cardinality and statistics for indexed columns. Increasing this value increases index statistics accuracy, but uses more I/O resources when running . See .

    • Command line: --innodb-stats-persistent-sample-pages=#

    • Scope: Global

    • Dynamic: Yes
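
    Persistent statistics can also be controlled per table through table options, overriding the global defaults. A sketch with a hypothetical table:

      CREATE TABLE orders (
        id BIGINT PRIMARY KEY,
        customer_id BIGINT,
        KEY (customer_id)
      ) ENGINE=InnoDB
        STATS_PERSISTENT=1
        STATS_AUTO_RECALC=1
        STATS_SAMPLE_PAGES=40;

      ANALYZE TABLE orders;  -- refresh the persistent statistics on demand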

    innodb_stats_sample_pages

    • Description: Gives control over the index distribution statistics by determining the number of index pages to sample. Higher values produce more disk I/O, but, especially for large tables, produce more accurate statistics and therefore make more effective use of the query optimizer. Lower values than the default are not recommended, as the statistics can be quite inaccurate.

      • If is enabled, then the exact number of pages configured by this system variable are sampled for statistics.

      • If is disabled, then the number of pages to sample for statistics is calculated using a logarithmic algorithm, so the exact number can change depending on the size of the table. This means that more samples may be used for larger tables.

    innodb_stats_traditional

    • Description: This system variable affects how the number of pages to sample for transient statistics is determined, in particular how is used.

      • If is enabled, then the exact number of pages configured by the system variable are sampled for statistics.

      • If is disabled, then the number of pages to sample for statistics is calculated using a logarithmic algorithm, so the exact number can change depending on the size of the table. This means that more samples may be used for larger tables.

    innodb_stats_transient_sample_pages

    • Description: Gives control over the index distribution statistics by determining the number of index pages to sample. Higher values produce more disk I/O, but, especially for large tables, produce more accurate statistics and therefore make more effective use of the query optimizer. Lower values than the default are not recommended, as the statistics can be quite inaccurate.

      • If is enabled, then the exact number of pages configured by this system variable are sampled for statistics.

      • If is disabled, then the number of pages to sample for statistics is calculated using a logarithmic algorithm, so the exact number can change depending on the size of the table. This means that more samples may be used for larger tables.

    innodb_stats_update_need_lock

    • Description: Setting to 0 (1 is default) may help reduce contention of the dict_operation_lock, but also disables the Data_free option in . This Percona XtraDB variable has not been ported to XtraDB 5.6.

    • Scope: Global

    • Dynamic: Yes

    innodb_status_output

    • Description: Enable output to the .

    • Command line: --innodb-status-output={0|1}

    • Scope: Global

    • Dynamic: Yes

    innodb_status_output_locks

    • Description: Enable output to the and . Also requires to enable output to the error log.

    • Command line: --innodb-status-output-locks={0|1}

    • Scope: Global

    • Dynamic: Yes
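
    A sketch of enabling the periodic output and the lock details together, and viewing the same report on demand:

      SET GLOBAL innodb_status_output = ON;
      SET GLOBAL innodb_status_output_locks = ON;
      SHOW ENGINE INNODB STATUS;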

    innodb_strict_mode

    • Description: If set to 1 (the default), InnoDB will return errors instead of warnings in certain cases, similar to strict SQL mode. See for details.

    • Command line: --innodb-strict-mode={0|1}

    • Scope: Global, Session
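
    As an illustration (the table name is hypothetical), strict mode turns certain conflicting table options into hard errors instead of warnings, for example combining ROW_FORMAT=DYNAMIC with KEY_BLOCK_SIZE:

      SET SESSION innodb_strict_mode = ON;
      CREATE TABLE t_strict (a INT) ENGINE=InnoDB
        ROW_FORMAT=DYNAMIC KEY_BLOCK_SIZE=4;  -- rejected with an error while strict mode is on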

    innodb_support_xa

    • Description: If set to 1, the default, XA transactions are supported. XA support ensures data is written to the in the same order as to the actual database, which is critical for and disaster recovery, but comes at a small performance cost. If your database is set up to only permit one thread to change data (for example, on a replication replica with only the replication thread writing), it is safe to turn this option off. Removed in ; XA transactions are always supported.

    • Command line: --innodb-support-xa

    • Scope: Global, Session

    innodb_sync_array_size

    • Description: By default 1; can be increased to split internal thread coordination, giving higher concurrency when there are many waiting threads.

    • Command line: --innodb-sync-array-size=#

    • Scope: Global

    • Dynamic: No

    innodb_sync_spin_loops

    • Description: The number of times a thread waits for an InnoDB mutex to be freed before the thread is suspended.

    • Command line: --innodb-sync-spin-loops=#

    • Scope: Global

    • Dynamic: Yes

    innodb_table_locks

    • Description: If is set to 0 (1 is default), setting innodb_table_locks to 1, the default, will cause InnoDB to lock a table internally upon a .

    • Command line: --innodb-table-locks

    • Scope: Global, Session

    innodb_thread_concurrency

    • Description: Once this number of threads is reached (excluding threads waiting for locks), XtraDB/InnoDB will place new threads in a wait state in a first-in, first-out queue for execution, in order to limit the number of threads running concurrently. A setting of 0, the default, permits as many threads as necessary. A suggested setting is twice the number of CPUs plus the number of disks. Deprecated and ignored from .

    • Command line: --innodb-thread-concurrency=#

    • Scope: Global

    innodb_thread_concurrency_timer_based

    • Description: If set to 1, thread concurrency is handled in a lock-free timer-based manner rather than the default mutex-based method. Depends on atomic op builtins being available. This Percona XtraDB variable has not been ported to XtraDB 5.6.

    • Command line: innodb-thread-concurrency-timer-based={0|1}

    • Scope: Global

    innodb_thread_sleep_delay

    • Description: Time in microseconds that InnoDB threads sleep before joining the queue. Setting to 0 disables sleep. Deprecated and ignored from

    • Command line: --innodb-thread-sleep-delay=#

    • Scope: Global

    innodb_temp_data_file_path

    • Description: Path where data for temporary tables is stored. The argument is filename:size, followed by options separated by ':'. Multiple paths can be given, separated by ';'. A file size is specified (with K for kilobytes, M for megabytes and G for gigabytes). Whether or not to autoextend the data file, the maximum size, and whether or not to on startup may also be specified.

    • Command line: --innodb-temp-data-file-path=path

    innodb_tmpdir

    • Description: Allows an alternate location to be set for temporary non-tablespace files. If not set (the default), files are created in the usual location. The alternate location must be outside of the datadir.

    • Command line: --innodb-tmpdir=path

    • Scope: Global

    innodb_track_changed_pages

    • Description: For faster incremental backup with , XtraDB tracks pages with changes written to them according to the and writes the information to special changed page bitmap files. This read-only variable is used for controlling this feature. See also and . XtraDB only. Added as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

    • Command line: innodb-track-changed-pages={0|1}

    • Scope: Global

    innodb_track_redo_log_now

    • Description: Available on debug builds only. XtraDB only. Added as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

    • Command line: innodb-track-redo-log-now={0|1}

    • Scope: Global

    • Dynamic: Yes

    innodb_truncate_temporary_tablespace_now

    • Description: Set to ON to shrink the temporary tablespace.

    • Command line: innodb-truncate-temporary-tablespace-now={0|1}

    • Scope: Global

    • Dynamic: Yes

    innodb_undo_directory

    • Description: Path to the directory (relative or absolute) that InnoDB uses to create separate tablespaces for the . "." (the default value before 10.2.2) leaves the undo logs in the same directory as the other log files. From , the default value is NULL, and if no path is specified, undo tablespaces are created in the directory defined by . Use together with and . Undo logs are most usefully placed on a separate storage device.

    • Command line: --innodb-undo-directory=name

    • Scope: Global

    innodb_undo_log_truncate

    • Description: When enabled, that are larger than are marked for truncation. See also . Enabling this setting may cause stalls during heavy write workloads.

    • Command line: --innodb-undo-log-truncate[={0|1}]

    • Scope: Global

    • Dynamic: Yes
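
    A minimal sketch pairing the truncation switch with its size threshold; both are dynamic, and the threshold value is illustrative:

      SET GLOBAL innodb_max_undo_log_size = 1073741824;  -- 1 GiB
      SET GLOBAL innodb_undo_log_truncate = ON;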

    innodb_undo_logs

    • Description: Specifies the number of rollback segments that XtraDB/InnoDB will use within a transaction (or the number of active ). By default set to the maximum, 128, it can be reduced to avoid allocating unneeded rollback segments. See the status variable for the number of undo logs available. See also and . Replaced in . The contains information about the XtraDB rollback segments. Deprecated and ignored in , as it always makes sense to use the maximum number of rollback segments.

    • Command line: --innodb-undo-logs=#

    innodb_undo_tablespaces

    • Description: Number of tablespaces files used for dividing up the . Zero (the default before ) means that undo logs are all part of the system tablespace, which contains one undo tablespace more than the innodb_undo_tablespaces setting. A value of 1 is reset to 0, as 2 or more are needed for separate tablespaces. When the undo logs can grow large, splitting them over multiple tablespaces will reduce the size of any single tablespace. Until , must be set before InnoDB is initialized, or else MariaDB will fail to start, with an error saying that InnoDB did not find the expected number of undo tablespaces. The files are created in the directory specified by , and are named undoN, N being an integer. The default size of an undo tablespace is 10MB. From , multiple undo tablespaces are enabled by default, and the default is changed to 3 so that the space occupied by possible bursts of undo log records can be reclaimed after is set. Before , must have a non-zero setting for innodb_undo_tablespaces to take effect.

    innodb_use_atomic_writes

    • Description: Implement atomic writes on supported SSD devices. See for other variables affected when this is set.

    • Command line: innodb-use-atomic-writes={0|1}

    • Scope: Global

    • Dynamic: No

    innodb_use_fallocate

    • Description: Preallocate files fast, using operating system functionality. On POSIX systems, posix_fallocate system call is used.

      • Automatically set to 1 when is set - see .

      • See for more information.

    innodb_use_global_flush_log_at_trx_commit

    • Description: Determines whether a user can set the variable . If set to 1, a user cannot reset the value with a SET command, while if set to 0, a user can reset the value of innodb_flush_log_at_trx_commit. XtraDB only. Added as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

    • Command line: innodb-use-global-flush-log-at-trx_commit={0|1}

    innodb_use_mtflush

    • Description: Whether to enable Multi-Threaded Flush operations. For more information, see Fusion.

      • InnoDB's multi-thread flush feature was deprecated in and removed from . In later versions of MariaDB, use system variable instead.

      • See for more information.

    • Command line:

    innodb_use_native_aio

    • Description: For Linux systems only, specifies whether to use Linux's asynchronous I/O subsystem. Set to ON by default, it may be changed to 0 at startup if InnoDB detects a problem, or from /, if a 5.11 - 5.15 Linux kernel is detected, to avoid an io-uring bug/incompatibility (). MariaDB-10.6.6/MariaDB-10.7.2 and later also consider 5.15.3+ as a fixed kernel and default to ON. To really benefit from the setting, the files should be opened in O_DIRECT mode (, default from ), to bypass the file system cache. In this way, the reads and writes can be submitted with DMA, using the InnoDB buffer pool directly, and no processor cycles need to be used for copying data.

    • Command line: --innodb-use-native-aio={0|1}

    innodb_use_purge_thread

    • Description: Usually with InnoDB, data changed by a transaction is written to an undo space to permit read consistency, and freed when the transaction is complete. Many, or large, transactions, can cause the main tablespace to grow dramatically, reducing performance. This option, introduced in XtraDB 5.1 and removed for 5.5, allows multiple threads to perform the purging, resulting in slower, but much more stable performance.

    • Command line: --innodb-use-purge-thread=#

    • Scope: Global

    innodb_use_stacktrace

    • Description: If set to ON (OFF is default), a signal handler for SIGUSR2 is installed when the InnoDB server starts. When a long semaphore wait is detected at sync/sync0array.c, a SIGUSR2 signal is sent to the waiting thread and thread that has acquired the RW-latch. For both threads a full stacktrace is produced as well as if possible. XtraDB only. Added as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

    • Command line: --innodb-use-stacktrace={0|1}

    • Scope: Global

    innodb_use_sys_malloc

    • Description: If set to 1, the default, XtraDB/InnoDB will use the operating system's memory allocator. If set to 0 it will use its own. Deprecated in and removed in along with InnoDB's internal memory allocator.

    • Command line: --innodb-use-sys-malloc={0|1}

    • Scope: Global

    innodb_use_sys_stats_table

    • Description: If set to 1 (0 is default), XtraDB will use the SYS_STATS system table for extra table index statistics. When a table is opened for the first time, statistics will then be loaded from SYS_STATS instead of sampling the index pages. Statistics are designed to be maintained only by running an . Replaced by MySQL 5.6's Persistent Optimizer Statistics.

    • Command line: innodb-use-sys-stats-table={0|1}

    • Scope: Global

    innodb_use_trim

    • Description: Use trim to free up space of compressed blocks.

      • See for more information.

    • Command line: --innodb-use-trim={0|1}

    • Scope: Global

    innodb_version

    • Description: InnoDB version number. From , as the InnoDB implementation in MariaDB has diverged from MySQL, the MariaDB version is instead reported. For example, the InnoDB version reported in (which is based on MySQL 5.6) included encryption and variable-size page compression before MySQL 5.7 introduced them. (based on MySQL 5.7) introduced persistent AUTO_INCREMENT () in a GA release before MySQL 8.0. (based on MySQL 5.7) introduced instant ADD COLUMN () before MySQL.

    • Scope: Global

    • Dynamic: No

    innodb_write_io_threads

    • Description: Prior to , this was simply the number of I/O threads for InnoDB writes. From , asynchronous I/O functionality in the InnoDB Background Thread Pool replaces the old InnoDB I/O Threads. This variable is now multiplied by 256 to determine the maximum number of concurrent asynchronous I/O write requests that can be completed by the Background Thread Pool. The default is therefore 4*256 = 1024 concurrent asynchronous write requests. You may on rare occasions need to reduce this default on Linux systems running multiple MariaDB servers to avoid exceeding system limits, or increase it if spending too much time waiting on I/O requests.

    • Command line: --innodb-write-io-threads=#

    • Scope: Global

    This page is licensed: CC BY-SA / Gnu FDL

    Dynamic: No

  • Data Type: boolean

  • Default Value: OFF

  • estimate The default, and independent of innodb_io_capacity. If the oldest modified age exceeds 1/2 of the maximum age capacity, blocks are flushed every second at a rate determined by the number of modified blocks, LSN progress speed and the average age of all modified blocks.
  • keep_average Attempts to keep the I/O rate constant by using a shorter loop cycle of one tenth of a second. Designed for SSD cards.

  • Command line: --innodb-adaptive-checkpoint=#

  • Scope: Global

  • Dynamic: Yes

  • Data Type: string

  • Default Value: estimate

  • Valid Values: none or 0, reflex or 1, estimate or 2, keep_average or 3

  • Removed: XtraDB 5.5 - replaced with innodb_adaptive_flushing_method

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: ON

  • Data Type: double

  • Default Value: 10.000000

  • Range: 0 to 70

  • Command line: innodb-adaptive-flushing-method=value

  • Scope: Global

  • Dynamic: Yes

  • Data Type: enumeration

  • Default Value: estimate

  • Valid Values: native or 0, estimate or 1, keep_average or 2

  • Removed: - replaced with InnoDB flushing method from MySQL 5.6

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: OFF (>= ), ON (<= )

  • Dynamic: No
  • Data Type: numeric

  • Default Value: 1

  • Range: 1 to 64

  • Data Type: numeric

  • Default Value: 8

  • Range: 1 to 512

  • Dynamic: Yes
  • Data Type: numeric

  • Default Value:

    • 0 (>= )

    • 150000 (<= )

  • Range: 0 to 1000000

  • Introduced:

  • Deprecated:

  • Removed: MariaDB 10.6.0

  • Dynamic: No
  • Data Type: numeric

  • Default Value: 8388608

  • Range: 2097152 to 4294967295

  • Deprecated:

  • Removed:

  • Default Value: ON

  • Introduced: MariaDB 10.11.9, , , MariaDB 11.4.3, ,

  • Data Type: numeric

  • Default Value: 5

  • Range: 1 to 1073741824

  • Introduced:

  • Removed:

  • Data Type: boolean

  • Default Value: OFF

  • Introduced:

  • Removed:

  • Data Type: boolean

  • Default Value: OFF

  • Introduced:

  • Removed:

  • Data Type: boolean

  • Default Value: OFF

  • Introduced:

  • Removed:

  • Data Type: numeric

  • Default Value: 0

  • Introduced:

  • Removed:

  • Dynamic: Yes
  • Data Type: numeric

  • Default Value: 64 (from ) 8 (before ),

  • Range: 1 to 1000

  • 1 is the consecutive lock mode.
  • 2 is the interleaved lock mode.

  • In order to use Galera Cluster, the lock mode needs to be set to 2.

  • See AUTO_INCREMENT Handling in InnoDB: AUTO_INCREMENT Lock Modes for more information.

  • Command line: --innodb-autoinc-lock-mode=#

  • Scope: Global

  • Dynamic: No

  • Data Type: numeric

  • Default Value: 1

  • Range: 0 to 2

  • Data Type: numeric

  • Default Value: 3600

  • Range: 1 to 4294967295

  • Deprecated:

  • Removed: MariaDB 10.6.0

  • Data Type: boolean

  • Default Value: 0

  • Deprecated:

  • Removed: MariaDB 10.6.0

  • Data Type: numeric

  • Default Value: 604800

  • Range: 1 to 4294967295

  • Deprecated:

  • Removed: MariaDB 10.6.0

  • Data Type: boolean

  • Default Value: 0

  • Deprecated:

  • Removed: MariaDB 10.6.0

  • Dynamic: No

  • Data Type: boolean

  • Default Value: OFF

  • Removed:

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 0

  • Range: 0 to 100

  • Dynamic: No
  • Data Type: numeric

  • Default Value:

    • autosize (0), resulting in innodb_buffer_pool_size/64, if large_pages round down to multiple of largest page size, with 1MiB minimum (from )

    • 134217728 (until )

  • Range:

    • 0, as autosize, and then 1048576 to 18446744073709551615 (from )

    • 1048576 to innodb_buffer_pool_size/innodb_buffer_pool_instances (until )

  • Block size: 1048576

  • Deprecated and ignored from MariaDB 10.11.12, MariaDB 11.4.6, MariaDB 11.8.2

  • Data Type: boolean

  • Default Value: ON

  • Data Type: boolean

  • Default Value: OFF

  • Introduced:

  • Data Type: boolean

  • Default Value:

    • 25

  • Range: 1 to 100

  • Data Type: string

  • Default Value: ""

  • Valid Values: "" or "uncompressed"

  • Data Type: string

  • Default Value: ib_buffer_pool

  • Introduced:

  • Scope: Global

  • Dynamic: No

  • Data Type: numeric

  • Default Value: >= : 8, 1 (>= if innodb_buffer_pool_size < 1GB), or dependent on innodb_buffer_pool_size (Windows 32-bit)

  • Deprecated:

  • Removed: MariaDB 10.6.0

  • Data Type: boolean

  • Default Value: OFF

  • Dynamic: No
  • Data Type: boolean

  • Default Value: ON

  • Data Type: boolean

  • Default Value: OFF

  • Data Type: numeric

  • Default Value: 9223372036854775807

  • Range: 1 to 9223372036854775807

  • Dynamic: No

  • Data Type: boolean

  • Default Value: OFF

  • Deprecated:

  • Removed:

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 0

  • Range - 32 bit: 0 to 4294967295

  • Range - 64 bit: 0 to 18446744073709547520

  • Removed: - replaced by innodb_buffer_pool_load_at_startup

  • Data Type: boolean

  • Default Value: ON

  • Removed:

  • Data Type: boolean

  • Default Value: 0

  • Removed:

  • Dynamic: Yes
  • Data Type: numeric

  • Default Value: 134217728 (128MiB)

  • Range:

    • Minimum: 5242880 (5MiB ) for InnoDB Page Size <= 16k otherwise 25165824 (24MiB) for InnoDB Page Size > 16k (for versions less than next line)

    • Minimum: 2MiB InnoDB Page Size = 4k, 3MiB InnoDB Page Size = 8k, 5MiB = 16k, 10MiB = 32k, 20MiB = 64k, (>= , >= , >= , >= , >= , >= )

    • Minimum: 1GiB for > 1 (<= )

    • Maximum: 9223372036854775807 (8192PB) (all versions)

  • Block size: 1048576

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 134217728 (128MiB)

  • Range: 0 to 18446744073701163008

  • Block size: 8388608 (8 MB on 64-bit systems)

  • Introduced: MariaDB 10.11.12, MariaDB 11.4.6, MariaDB 11.8.2

  • Data Type: numeric

  • Default Value: specified by the initial value of innodb_buffer_pool_size, rounded up to the block size of that variable. See the section about buffer pool changes in MariaDB 10.11.12, 11.4.6, and 11.8.2.

  • Range: 0 to 18446744073701163008

  • Block size: 8388608 (8 MB on 64-bit systems)

  • Introduced: MariaDB 10.11.12, MariaDB 11.4.6, MariaDB 11.8.2

  • Default Value: OFF

  • Introduced: , ,

  • Data Type: numeric

  • Default Value: 25

  • Range: 0 to 50

  • Introduced:

  • Deprecated:

  • Removed:

  • Data Type: enumeration (>= ), string (<= )

  • Default Value:

    • = , MariaDB 10.6.7, , : none

    • <= , MariaDB 10.6.6, , :all

  • Valid Values: inserts, none, deletes, purges, changes, all

  • Deprecated:

  • Removed:

  • Scope: Global
  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 0

  • Range: 0 to 2

  • Dynamic: Yes
  • Data Type: numeric

  • Default Value: 0

  • Range: 0 upwards

  • Removed: - replaced with InnoDB flushing method from MySQL 5.6.

  • full_crc32 and strict_full_crc32: From . Permits encryption to be supported over a SPATIAL INDEX, which crc32 does not support. Newly-created data files will carry a flag that indicates that all pages of the file will use a full CRC-32C checksum over the entire page contents (excluding the bytes where the checksum is stored, at the very end of the page). Such files will always use that checksum, no matter what parameter innodb_checksum_algorithm is assigned to. Even if innodb_checksum_algorithm is modified later, the same checksum will continue to be used. A special flag are set in the FSP_SPACE_FLAGS in the first data page to indicate the new format of checksum and encryption/page_compressed. ROW_FORMAT=COMPRESSED tables will only use the old format. These tables do not support new features, such as larger innodb_page_size or instant ADD/DROP COLUMN. Also cleans up the MariaDB tablespace flags - flags are reserved to store the page_compressed compression algorithm, and to store the compressed payload length, so that checksum can be computed over the compressed (and possibly encrypted) stream and can be validated without decrypting or decompressing the page. In the full_crc32 format, there no longer are separate before-encryption and after-encryption checksums for pages. The single checksum is computed on the page contents that is written to the file.See MDEV-12026 for details.

  • none: Writes a constant rather than calculate a checksum. Deprecated in , , and removed in MariaDB 10.6 as was mostly used to disable the original, slow, page checksum for benchmarking purposes.

  • strict_crc32, strict_innodb and strict_none: The options are the same as the regular options, but InnoDB will halt if it comes across a mix of checksum values. These are faster, as both new and old checksum values are not required, but can only be used when setting up tablespaces for the first time.

  • Command line: --innodb-checksum-algorithm=#

  • Scope: Global

  • Dynamic: Yes

  • Data Type: enumeration

  • Default Value:

    • full_crc32 (>= )

    • crc32 (>= to <= )

    • innodb (<= )

  • Valid Values:

    • = MariaDB 10.6.0: crc32, full_crc32, strict_crc32, strict_full_crc32

    • , >= : innodb, crc32, full_crc32, none, strict_innodb, strict_crc32, strict_none, strict_full_crc32

    • <= : innodb, crc32, none, strict_innodb, strict_crc32, strict_none

  • ,
    --skip-innodb-checksums
  • Scope: Global

  • Dynamic: No

  • Data Type: boolean

  • Default Value: ON

  • Deprecated:

  • Removed:

  • Dynamic: Yes

  • Data Type: enum

  • Default Value:

    • deprecated

  • Valid Values:

    • deprecated, high_checkpoint, legacy

  • Deprecated:

  • Removed:

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: OFF

  • Introduced:

  • Data Type: numeric

  • Default Value: 0

  • Range: 0 to 1000

  • Deprecated:

  • Removed: MariaDB 10.6.0

  • lz4: Pages are compressed using the lz4 compression algorithm.
  • lzo: Pages are compressed using the lzo compression algorithm.

  • lzma: Pages are compressed using the lzma compression algorithm.

  • bzip2: Pages are compressed using the bzip2 compression algorithm.

  • snappy: Pages are compressed using the snappy algorithm.

  • On many distributions, MariaDB may not support all page compression algorithms by default. From , libraries can be installed as a plugin. See Compression Plugins.

  • See InnoDB Page Compression: Configuring the InnoDB Page Compression Algorithm for more information.

  • Command line: --innodb-compression-algorithm=value

  • Scope: Global

  • Dynamic: Yes

  • Data Type: enum

  • Default Value: zlib

  • Valid Values:none, zlib, lz4, lzo, lzma, bzip2 or snappy

  • --innodb-compression-default={0|1}
  • Scope: Global, Session

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: OFF

  • Scope: Global

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 5

  • Range: 0 to 100

  • Introduced:

  • 1 is the fastest and 9 is the most compact.
  • See InnoDB Page Compression: Configuring the Default Compression Level for more information.

  • Command line: --innodb-compression-level=#

  • Scope: Global

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 6

  • Range: 1 to 9

  • Introduced:

  • Scope: Global

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 50

  • Range: 0 to 75

  • Introduced:

  • Data Type: numeric

  • Default Value:

    • 0 (>= )

    • 5000 (<= )

  • Range: 1 to 18446744073709551615

  • Deprecated:

  • Removed: MariaDB 10.6.0

  • If set to salvage, read access is permitted, but corrupted pages are ignored. innodb_file_per_table must be enabled for this option. Previously named innodb_pass_corrupt_table.

  • Added as a deprecated and ignored option in (which uses InnoDB as default instead of XtraDB) to allow for easier upgrades.

  • Command line: innodb-corrupt-table-action=value

  • Scope: Global

  • Dynamic: Yes

  • Data Type: enumeration

  • Default Value:

    • assert (<= )

    • deprecated (<= )

  • Valid Values:

    • deprecated, assert, warn, salvage

  • Deprecated:

  • Removed:

  • Scope: Global

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: OFF

  • Introduced:

  • Dynamic: No

  • Data Type: numeric

  • Default Value: ibdata1:12M:autoextend (from ), ibdata1:10M:autoextend (before )

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: OFF

  • Introduced:

  • Data Type: directory name

  • Default Value: The MariaDB data directory

  • Data Type: boolean

  • Default Value: 1

  • full: Default. Report transactions, waiting locks, and blocking locks (see the example after this list).
  • Command line: --innodb-deadlock-report=val

  • Scope: Global

  • Dynamic: Yes

  • Data Type: enum

  • Default Value: full

  • Valid Values: off, basic, full

  • Introduced: MariaDB 10.6.0
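
A minimal sketch of reducing the amount of deadlock information recorded (basic records less detail than the default full); the latest detected deadlock can still be inspected afterwards.

  SET GLOBAL innodb_deadlock_report = 'basic';
  -- the most recent deadlock (if any) is shown in the status output
  SHOW ENGINE INNODB STATUS;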

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 1

  • Range: 1 to 255

  • Introduced:

  • Removed:

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 1

  • Range: 1 to 4294967295

  • Scope: Global
  • Dynamic: Yes

  • Data Type: enum

  • Default Value: dynamic

  • Valid Values: redundant, compact or dynamic

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: OFF

  • Deprecated:

  • Removed:

  • Data Type: double

  • Default Value: 0.9

  • Range: 0.7 to 1

  • Deprecated:

  • Removed:

  • Dynamic: Yes
  • Data Type: numeric

  • Default Value: 20

  • Range: 1 to 100

  • Deprecated:

  • Removed:

  • Dynamic: Yes
  • Data Type: integer

  • Default Value: 40

  • Range: 1 to 1000

  • Deprecated:

  • Removed:

  • Data Type: numeric

  • Default Value: 7

  • Range: 2 to 32

  • Deprecated:

  • Removed:

  • Data Type: numeric

  • Default Value: 0

  • Range: 0 to 4294967295

  • Deprecated:

  • Removed:

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 0

  • Default Value - 32 bit: 2147483648

  • Default Value - 64 bit: 9223372036854775807

  • Removed: - replaced by MySQL 5.6's table_definition_cache implementation.

  • Dynamic: Yes
  • Data Type: boolean

  • Default Value: OFF

  • Data Type: boolean
  • Default Value: OFF

  • Removed: , , , MariaDB 10.6.8,

  • The setting innodb_doublewrite=fast could be safe when the doublewrite buffer (the first file of the system tablespace) and the data files reside in the same file system.

  • Command line: --innodb-doublewrite{=val}, --skip-innodb-doublewrite

  • Scope: Global

  • Dynamic: Yes

  • Data Type: enum

  • Default Value: ON

  • Valid Values: OFF, ON, fast

    • Description: If set to 1 (the default), InnoDB first stores data in a doublewrite buffer before writing it to the data files, to improve fault tolerance. Disabling it provides a marginal performance improvement.

    • Command line: --innodb-doublewrite, --skip-innodb-doublewrite

    • Scope: Global

    • Dynamic:No

    • Data Type: boolean

    • Default Value: ON

    Dynamic: No
  • Data Type: filename

  • Default Value: NULL

  • Removed:

  • Scope: Global

  • Dynamic: Yes

  • Data Type: enum

  • Default Value:

    • deprecated

  • Valid Values:

    • deprecated, backoff, legacy

  • Deprecated:

  • Removed:

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 0

  • Range: 0 to 1

  • Removed: Not needed after XtraDB 1.0.5

  • Scope: Global
  • Dynamic: No

  • Data Type: boolean

  • Default Value: OFF

  • ON - Enables table encryption for new and existing tables that have the ENCRYPTED table option set to DEFAULT, but allows unencrypted tables to be created.
  • FORCE - Enables table encryption for all new and existing tables that have the ENCRYPTED table option set to DEFAULT, and doesn't allow unencrypted tables to be created (CREATE TABLE ... ENCRYPTED=NO will fail). A usage sketch follows this list.

  • See Data-at-Rest Encryption and Enabling InnoDB Encryption: Enabling Encryption for Automatically Encrypted Tablespaces for more information.

  • Command line: --innodb-encrypt-tables={0|1}

  • Scope: Global

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: OFF

  • Valid Values: ON, OFF, FORCE
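
A minimal sketch of enabling automatic table encryption at runtime; it assumes an encryption key management plugin (for example file_key_management) is already configured, which is a prerequisite not shown here.

  -- encrypt tables whose ENCRYPTED option is DEFAULT, while still allowing ENCRYPTED=NO
  SET GLOBAL innodb_encrypt_tables = ON;
  -- background threads re-encrypt existing tablespaces over time
  SET GLOBAL innodb_encryption_threads = 4;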

  • Scope: Global
  • Dynamic: No

  • Data Type: boolean

  • Default Value: OFF

  • Valid Values: ON, OFF

  • Introduced: , ,

  • Scope: Global

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 1

  • Range: 0 to 4294967295

  • Scope: Global
  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 100

  • Range: 0 to 4294967295

  • Scope: Global

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 0

  • Range:

    • 0 to 4294967295 (<= , , , , )

    • 0 to 255 (>= , , , , )

  • Dynamic: No
  • Data Type: numeric

  • Default Value: 0

  • Range: 0 to 126

  • Removed: XtraDB 5.5 - replaced by innodb_rollback_segments

  • Dynamic: No

  • Data Type: boolean

  • Default Value: OFF

  • Removed: XtraDB 5.5

  • Scope: Global, Session
  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: OFF

  • Deprecated:

  • Removed:

  • Scope: Global

  • Dynamic: No

  • Data Type: boolean

  • Default Value: OFF

  • Removed: /XtraDB 5.6 - replaced with innodb_checksum_algorithm

  • 2 - The InnoDB redo log is flushed and a cold shutdown takes place, similar to a crash. The resulting startup then performs crash recovery. Extremely fast, for use in emergencies, but risks corruption. Not suitable for upgrades between major versions! See also the example after this list.

  • 3 (from ) - active transactions will not be rolled back, but all changed pages are written to data files. The active transactions are rolled back by a background thread on a subsequent startup. The fastest option that will not involve InnoDB redo log apply on subsequent startup. See MDEV-15832.

  • Command line: --innodb-fast-shutdown[=#]

  • Scope: Global

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 1

  • Range: 0 to 3 (>= ), 0 to 2 (<= )
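
A minimal sketch of forcing the slowest, cleanest shutdown before a major-version upgrade; SHUTDOWN assumes the connected user has the required privilege.

  -- complete a full purge and change buffer merge during shutdown
  SET GLOBAL innodb_fast_shutdown = 0;
  SHUTDOWN;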

  • Data Type: numeric

  • Default Value: 600

  • Range: 1 to 4294967295

  • Dynamic: Yes

  • Data Type: string

  • Default Value:

    • Barracuda

  • Valid Values: Antelope, Barracuda

  • Deprecated:

  • Removed:

  • Re-introduced: (for compatibility purposes)

  • Removed: MariaDB 10.6.0

  • Dynamic: No

  • Data Type: boolean

  • Default Value: ON

  • Deprecated:

  • Removed:

  • Dynamic: Yes
  • Data Type: string

  • Default Value: Antelope

  • Valid Values: Antelope, Barracuda

  • Deprecated:

  • Removed:

  • Scope: Global

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: ON

  • Deprecated:

  • Dynamic: Yes
  • Data Type: numeric

  • Default Value: 100

  • Range: 10 to 100

  • Default Value: 1

  • Range: 0 to 2700

  • 0 - Nothing is done on commit; rather, the log buffer is written and flushed to the InnoDB redo log once a second. This gives better performance, but a server crash can erase the last second of transactions (see the example after this list).
  • 2 - The log buffer is written to the InnoDB redo log after each commit, but flushing to disk takes place every innodb_flush_log_at_timeout seconds (by default once a second). Performance is slightly better, but an OS crash or power outage can cause the last second's transactions to be lost.

  • 3 - Emulates group commit (3 syncs per group commit). See Binlog group commit and innodb_flush_log_at_trx_commit. This option has not been working correctly since MariaDB 10.2 and may be removed in the future; see 1873.

  • Command line: --innodb-flush-log-at-trx-commit[=#]

  • Scope: Global

  • Dynamic: Yes

  • Data Type: enumeration

  • Default Value: 1

  • Valid Values: 0, 1, 2 or 3
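
A minimal sketch of relaxing redo log flushing during a bulk load and restoring the default afterwards; because the variable is dynamic, both statements take effect immediately.

  -- flush the redo log to disk only about once per second
  SET GLOBAL innodb_flush_log_at_trx_commit = 2;
  -- ... run the bulk load ...
  SET GLOBAL innodb_flush_log_at_trx_commit = 1;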

  • O_DIRECT - O_DIRECT or directio() is used to open the data files, and fsync() to flush data and logs. Default on Unix from .
  • fsync - Default on Unix until . Can be specified explicitly, but if the variable is unset on Unix, fsync() is used by default.

  • O_DIRECT_NO_FSYNC - Introduced in . Uses O_DIRECT during flushing I/O, but skips fsync() afterwards. Not suitable for XFS filesystems. Generally not recommended over O_DIRECT, as it does not get the benefit of innodb_use_native_aio=ON.

  • ALL_O_DIRECT - Introduced in and available with XtraDB only. Uses O_DIRECT for opening both data and logs, and fsync() to flush data but not logs. Use with large InnoDB files only; otherwise it may cause performance degradation. Set innodb_log_block_size to 4096 on ext4 filesystems; this is the default log block size on ext4 and will avoid unaligned AIO/DIO warnings.

  • unbuffered - Windows-only default

  • async_unbuffered - Windows-only, alias for unbuffered

  • normal - Windows-only, alias for fsync

  • littlesync - for internal testing only

  • nosync - for internal testing only

  • Deprecated in and replaced by four boolean dynamic variables that can be changed while the server is running: innodb_log_file_buffering (disable O_DIRECT, added by MDEV-28766 in 10.8.4, 10.9.2), innodb_data_file_buffering (disable O_DIRECT on data files), innodb_log_file_write_through (enable O_DSYNC on the log), and innodb_data_file_write_through (enable O_DSYNC on persistent data files). From , if innodb_flush_method is set to one of the following values, the four boolean flags are set as follows (a sketch of using the new flags follows this list):

    • O_DSYNC: innodb_log_file_write_through=ON, innodb_data_file_write_through=ON, innodb_data_file_buffering=OFF, and (if supported) innodb_log_file_buffering=OFF.

    • fsync, littlesync, nosync, or (Microsoft Windows specific) normal: , , and .

  • Command line: --innodb-flush-method=name

  • Scope: Global

  • Dynamic: No

  • Data Type: enumeration (>= ), string (<= )

  • Default Value:

    • O_DIRECT (Unix, >= MariaDB 10.6.0)

    • fsync (Unix, >= , <= )

    • Not set (<= )

  • Valid Values:

    • Unix: fsync, O_DSYNC, littlesync, nosync, O_DIRECT, O_DIRECT_NO_FSYNC

    • Windows: unbuffered, async_unbuffered, normal

  • Deprecated:
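
innodb_flush_method itself is not dynamic, so on versions that provide the four boolean flags described above, a minimal runtime sketch is to adjust those instead (availability depends on the server version).

  -- see how log and data file buffering is currently configured
  SHOW GLOBAL VARIABLES LIKE 'innodb%file_buffering';
  -- disable file system caching (O_DIRECT) on the redo log, where supported
  SET GLOBAL innodb_log_file_buffering = OFF;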

  • Scope: Global

  • Dynamic: Yes

  • Data Type: enumeration

  • Default Value: area

  • Valid Values: none or 0, area or 1, cont or 2

  • Removed: /XtraDB 5.6 - replaced by innodb_flush_neighbors

  • 0: No other dirty pages are flushed.
  • 2: Flushes dirty pages in the same extent from the buffer pool (see the example after this list).

  • Command line: --innodb-flush-neighbors=#

  • Scope: Global

  • Dynamic: Yes

  • Data Type: enumeration

  • Default Value: 1

  • Valid Values: 0, 1, 2
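
A minimal sketch under the assumption that the data files are on SSD storage, where flushing neighboring pages gives little benefit.

  -- flush only the requested page, not its neighbors
  SET GLOBAL innodb_flush_neighbors = 0;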

  • Data Type: boolean

  • Default Value: ON

  • Dynamic: Yes
  • Data Type: numeric

  • Default Value: 30

  • Range: 1 to 1000

  • Scope: Global

  • Dynamic: No

  • Data Type: boolean

  • Default Value: OFF

  • Removed: MariaDB 10.6.6

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: OFF

  • Dynamic: No
  • Data Type: enumeration

  • Default Value: 0

  • Range: 0 to 6

  • sync_preflush - The thread issues a flush list batch and waits for it to complete. This is the same behavior as when the page cleaner thread is not running.
  • Command line: innodb-foreground-preflush=value

  • Scope: Global

  • Dynamic: Yes

  • Data Type: enum

  • Default Value:

    • deprecated

  • Valid Values:

    • deprecated, exponential_backoff, sync_preflush

  • Deprecated:

  • Removed:

  • Dynamic: Yes

  • Data Type: string

  • Data Type: numeric

  • Default Value: 8000000

  • Data Type: boolean

  • Default Value: OFF

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: ON

  • Data Type: numeric

  • Default Value: 84

  • Range: 10 to 84

  • Dynamic: No
  • Data Type: numeric

  • Default Value: 3

  • Range: 1 to 16

  • Data Type: numeric

  • Default Value: 2000

  • Range: 1000 to 10000

  • Data Type: numeric

  • Default Value: 2000000000

  • Range: 1000000 to 18446744073709551615

  • Dynamic: Yes
  • Data Type: string

  • Default Value: Empty

  • Data Type: numeric

  • Default Value: 2

  • Range: 1 to 32

  • Data Type: numeric

  • Default Value: 640000000

  • Range: 32000000 to 1600000000

  • Introduced:

  • Dynamic: Yes
  • Data Type: string

  • Default Value: Empty

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 100

  • Range: 100 to 999999999

  • Removed:

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 1

  • Range: 0 to 1

  • Removed:

  • Dynamic: No

  • Data Type: numeric

  • Default Value: 1/2 the size of the InnoDB buffer pool

  • Range: 0 to 1/2 the size of the InnoDB buffer pool

  • Removed:

  • Dynamic: Yes
  • Data Type: numeric

  • Default Value: 100

  • Range: 0 to 100

  • Deprecated: , ,

  • Removed:

  • Data Type: boolean

  • Default Value: OFF

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 0

  • Range: 0 to 1

  • Removed:

  • , or to be able to export tables to older versions of the server.
  • This variable has been introduced as a result, with the following values (a usage sketch follows this list):

  • never (0): Do not allow instant add/drop/reorder, to maintain format compatibility with MariaDB 10.x and MySQL 5.x. If the table (or partition) is not in the canonical format, then any ALTER TABLE (even one that does not involve instant column operations) will force a table rebuild.

  • add_last (1, default in 10.3): Store a hidden metadata record that allows columns to be appended to the table instantly (MDEV-11369). In 10.4 or later, if the table (or partition) is not in this format, then any ALTER TABLE (even one that does not involve column changes) will force a table rebuild.

  • add_drop_reorder (2, default): From only. Like 'add_last', but allow the metadata record to store a column map, to support instant add/drop/reorder of columns.

  • Command line: --innodb-instant-alter-column-allowed=value

  • Scope: Global

  • Dynamic: Yes

  • Data Type: enum

  • Valid Values:

    • <= : never, add_last

    • = : never, add_last, add_drop_reorder

  • Default Value:

    • <= : add_last

    • = : add_drop_reorder

  • Introduced: , ,
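
A brief sketch of restricting instant column changes for compatibility; the table name t3 is illustrative. With the value never, the ALTER below is expected to rebuild the table instead of using the instant metadata format.

  SET GLOBAL innodb_instant_alter_column_allowed = 'never';
  CREATE TABLE t3 (id INT PRIMARY KEY, a INT) ENGINE=InnoDB;
  -- rebuilt rather than performed as an instant ADD COLUMN
  ALTER TABLE t3 ADD COLUMN b INT;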

  • Data Type: boolean

  • Default Value: OFF

  • Deprecated: (treated as if OFF)

  • Removed:

  • Scope: Global

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 200

  • Range: 100 to 18446744073709551615 (264-1)

  • Data Type: numeric

  • Default Value: 2000 or twice innodb_io_capacity, whichever is higher.

  • Range : 100 to 18446744073709551615 (264-1)

  • Data Type: numeric
  • Default Value: 0

  • Range: 0 to 9223372036854775807

  • Deprecated:

  • Removed:

  • Command line: --innodb-large-prefix

  • Scope: Global

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value:

    • ON

  • Deprecated:

  • Removed:

  • Re-introduced: (for compatibility purposes)

  • Removed: MariaDB 10.6.0

  • Scope: Global
  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: 0

  • Deprecated: XtraDB 5.5.30-30.2

  • Removed:

  • Command line: --innodb-lock-schedule-algorithm=#

  • Scope: Global

  • Dynamic: No (>= , ), Yes (<= , )

  • Data Type: enum

  • Valid Values: FCFS, VATS

  • Default Value: FCFS (, ), VATS (), FCFS ()

  • Deprecated: , , ,

  • Removed: MariaDB 10.6.0

  • Scope: Global, Session

  • Dynamic: Yes

  • Data Type: INT UNSIGNED (>= MariaDB 10.6.3), BIGINT UNSIGNED (<= MariaDB 10.6.2)

  • Default Value: 50

  • Range:

    • 0 to 100000000 (>= MariaDB 10.6.3)

    • 0 to 1073741824 (>= to <= MariaDB 10.6.2)

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: ON

  • Deprecated:

  • Removed:

  • Dynamic: No

  • Data Type: boolean

  • Default Value: OFF

  • Deprecated:

  • Removed:

  • Data Type: string

  • Default Value: ./

  • Deprecated:

  • Removed:

  • Data Type: numeric

  • Default Value: 0

  • Deprecated:

  • Removed:

  • Data Type: boolean

  • Default Value: OFF

  • Deprecated:

  • Removed:

  • Scope: Global

  • Dynamic: No

  • Data Type: numeric

  • Default Value: 512

  • Deprecated:

  • Removed:

  • Data Type: numeric

  • Default Value: 16777216 (16MB)

  • Range: 262144 to 2147479552 (256KB to 2GB - 4K) (>= MariaDB 10.11.8)

  • Range: 262144 to 18446744073709551615 (<= MariaDB 10.11.7) - limit to the above maximum because this is an operating system limit.

  • Block size: 4096

  • Dynamic: Yes
  • Data Type: boolean

  • Default Value: OFF

  • Introduced: MariaDB 10.11.12, MariaDB 11.4.6, MariaDB 11.8.2

  • crc32 - CRC-32C is used for log block checksums, which also permits recent CPUs (SSE4.2 x86 machines and Power8 or later) to use hardware acceleration for the checksums.

  • strict_* - Whether or not to accept checksums from other algorithms. If strict mode is used, blocks are considered corrupt if their checksum does not match the specified algorithm; normally they are considered corrupt only if no algorithm matches.

  • Command line: innodb-log-checksum-algorithm=value

  • Scope: Global

  • Dynamic: Yes

  • Data Type: enum

  • Default Value:

    • deprecated (>= )

    • innodb (<= )

  • Valid Values:

    • deprecated, innodb, none, crc32, strict_none, strict_innodb, strict_crc32 (>= )

    • innodb, none, crc32, strict_none, strict_innodb, strict_crc32 (<= )

  • Deprecated:

  • Removed:

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: ON

  • Deprecated:

  • Removed: MariaDB 10.6.0

  • Data Type: boolean

  • Default Value:

    • ON

  • Deprecated:

  • Removed: MariaDB 10.6.0

  • Scope: Global

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: OFF

  • Introduced: MariaDB 10.8.4, MariaDB 10.9.2

  • Scope: Global
  • Dynamic: No

  • Data Type: boolean

  • Default Value: ON (Linux, FreeBSD), OFF (Other platforms)

  • Introduced: MariaDB 10.11.10, , MariaDB 11.4.4, ,

  • Dynamic: Yes (>= ), No (<= )

  • Data Type: numeric

  • Default Value: 100663296 (96MB) (>= ), 50331648 (48MB) (<= )

  • Range:

    • = : 4194304 to 512GB (4MB to 512GB)

    • <= : 1048576 to 512GB (1MB to 512GB)

  • Block size: 4096

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: OFF

  • Introduced:

  • Data Type: numeric

  • Default Value: 1 (>= ), 2 (<= )

  • Range: 1 to 100 (>= )

  • Deprecated:

  • Removed: MariaDB 10.6.0

  • Data Type: directory name

    Dynamic: Yes
  • Data Type: boolean

  • Default Value:

    • OFF (>= , , , )

    • ON (<= , , , )

  • Introduced: ,

  • Deprecated: , , ,

  • Removed: MariaDB 10.6.0

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 0

  • Range: 0 to 6000

  • Introduced: MariaDB 10.11.8, , , , MariaDB 11.4.2,

  • Dynamic: No (>= MariaDB 10.11.9), Yes (<= )
  • Data Type: numeric

  • Default Value: 512 (>= MariaDB 10.11.9), 8192 (<= )

  • Range:

    • 512 to 4096 (>= MariaDB 10.11.9)

    • 512 to innodb_page_size (<= )

  • Removed:

  • Re-introduced: MariaDB 10.11.9, , , MariaDB 11.4.3,

  • Dynamic: Yes
  • Data Type: numeric

  • Default Value:

    • 0 (>=MariaDB 10.6.20, MariaDB 10.11.10, , MariaDB 11.4.4. , )

    • 32 (<= MariaDB 10.6.19, MariaDB 10.11.9, , MariaDB 11.4.3)

  • Range: 1 to 18446744073709551615

  • Introduced:

  • Deprecated: MariaDB 10.6.20, MariaDB 10.11.10, and MariaDB 11.4.4. ,

  • Scope: Global
  • Dynamic: Yes

  • Data Type: numeric

  • Default Value:

    • 1536 (>= )

    • 1024 (<= )

  • Range - 32bit: 100 to 2^32-1

  • Range - 64bit: 100 to 2^64-1

  • Scope: Global

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 4096 (4KB)

  • Range: 4096 (4KB) to 18446744073709551615 (16EB)

  • Deprecated:

  • Scope: Global

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 1000000

  • Range: 0 to 18446744073709551615

  • Deprecated:

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value:

    • 90.000000 (>= )

    • 75.000000 (<= )

  • Range: 0 to 99.999

  • Scope: Global
  • Dynamic: Yes

  • Data Type: numeric

  • Default Value:

    • 0 (>= )

    • 0.001 (<= )

  • Range: 0 to 99.999

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 0

  • Range: 0 to 4294967295

  • Data Type: numeric

  • Default Value: 0

  • Data Type: numeric

  • Default Value: 4294967295

  • Range: 0 to 4294967295

  • Introduced: , , ,

  • Data Type: numeric

  • Default Value:

    • 10485760

  • Range: 10485760 to 18446744073709551615

  • Data Type: numeric

  • Default Value: 1048576 (1M)

  • Range: 1048576 (1M) to 1073741824 (1G)

  • Removed: - replaced by innodb_sort_buffer_size

  • Command line:
    --innodb-mtflush-threads=#
  • Scope: Global

  • Dynamic: No

  • Data Type: numeric

  • Default Value: 8

  • Range: 1 to 64

  • Deprecated:

  • Removed:

  • Data Type: string

    Data Type: string

    Data Type: string

    Data Type: string

    Data Type: boolean

  • Default Value: OFF

  • Removed: , ,

  • Data Type: numeric

  • Default Value: 37

  • Range: 5 to 95

  • Data Type: numeric

  • Default Value: 1000

  • Range: 0 to 2^32-1

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 134217728

  • Range: 65536 to 2^64-1

  • Dynamic: No

  • Data Type: numeric

  • Default Value: autosized

  • Range: 10 to 4294967295

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: OFF

  • --innodb-page-cleaners=#
  • Scope: Global

  • Dynamic: Yes (>= ), No (<= )

  • Data Type: numeric

  • Default Value: 4 (or set to innodb_buffer_pool_instances if lower)

  • Range: 1 to 64

  • Deprecated:

  • Removed: MariaDB 10.6.0

  • InnoDB's page size can be as large as 64k for tables using the following row formats: DYNAMIC, COMPACT, and REDUNDANT.
  • InnoDB's page size must still be 16k or less for tables using the COMPRESSED row format.

  • This system variable's value cannot be changed after the datadir has been initialized. InnoDB's page size is set when a MariaDB instance starts, and it remains constant afterwards (see the query example after this list).

  • Command line: --innodb-page-size=#

  • Scope: Global

  • Dynamic: No

  • Data Type: enumeration

  • Default Value: 16384

  • Valid Values: 4k or 4096, 8k or 8192, 16k or 16384, 32k and 64k.
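
Because the page size is fixed when the data directory is initialized, it can only be inspected at runtime; choosing a non-default value has to happen in the server configuration before initialization.

  -- page size, in bytes, that this instance was initialized with
  SELECT @@GLOBAL.innodb_page_size;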

  • Data Type: boolean

  • Default Value: OFF

  • Deprecated:

  • Dynamic: Yes
  • Data Type: boolean

  • Default Value: OFF

  • Data Type: numeric

  • Default Value:

    • 127 (>= MariaDB 10.6.20, MariaDB 10.11.10, , MariaDB 11.4.4, )

    • 1000 (>= MariaDB 10.6.16, , MariaDB 10.11.6, , )

    • 300 (<= , , , , )

  • Range: 1 to 5000

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 128

  • Range: 1 to 128

  • Deprecated: MariaDB 10.6.16, , MariaDB 10.11.6, , ,

  • Dynamic: No

  • Data Type: numeric

  • Default Value: 4

  • Range: 1 to 32

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: OFF

  • Scope: Global

  • Dynamic: Yes

  • Data Type: enumeration

  • Default Value: linear

  • Valid Values: none, random, linear, both

  • Removed: /XtraDB 5.6 - replaced by MySQL 5.6's innodb_random_read_ahead

  • Data Type: numeric

  • Default Value: 56

  • Range: 0 to 64

  • Dynamic: Yes (>= MariaDB 10.11), No (<= )

  • Data Type: numeric

  • Default Value: 4

  • Range: 1 to 64

  • Dynamic: No
  • Data Type: boolean

  • Default Value: OFF

  • Dynamic: Yes
  • Data Type: boolean

  • Default Value: OFF (>= MariaDB 10.6.6), ON (<= MariaDB 10.6.5)

  • Introduced: MariaDB 10.6.0

  • Dynamic: No
  • Data Type: boolean

  • Default Value: OFF

  • Removed:

  • Scope: Global
  • Dynamic: No

  • Data Type: boolean

  • Default Value: OFF

  • Removed: - replaced by MySQL 5.6's relay-log-recovery

  • Data Type: numeric

  • Default Value: 0

  • Range: 0 to 4294967295

  • Deprecated:

  • Removed: MariaDB 10.6.0

  • Data Type: boolean

  • Default Value: 0

  • Dynamic: Yes
  • Data Type: numeric

  • Default Value: 128

  • Range: 1 to 128

  • Deprecated:

  • Removed:

  • Dynamic: No
  • Data Type: boolean

  • Default Value: ON

  • Introduced:

  • Removed:

  • Dynamic: No
  • Data Type: boolean

  • Default Value: OFF

  • Deprecated:

  • Removed: MariaDB 10.6.0

  • Data Type: numeric

  • Default Value: 56

  • Range: 0 to 50000

  • Introduced:

  • Removed:

  • Data Type: numeric

  • Default Value: 256

  • Range: 1 to 50000

  • Deprecated:

  • Removed: MariaDB 10.6.0

  • Command line: innodb-sched-priority-cleaner=#

  • Scope: Global

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 19

  • Range: 0 to 39

  • Deprecated:

  • Removed:

  • Data Type: numeric

  • Default Value: 10

  • Range: 0 to 1000

  • Deprecated:

  • Removed:

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 0

  • Range: 0 to 1

  • Deprecated:

  • Removed:

  • Data Type: numeric
  • Default Value: 0

  • Range: 0 to 99

  • Dynamic: Yes
  • Data Type: boolean

  • Default Value: ON (>= ), OFF (<= )

  • Introduced: MariaDB 10.6.18, MariaDB 10.11.8, , , , MariaDB 11.4.2

  • Dynamic: No
  • Data Type: numeric

  • Default Value: 1048576 (1M)

  • Range: 65536 to 67108864

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 4 (>= ), 6 (<= )

  • Range: 0 to 4294967295

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: ON

  • Data Type: boolean
  • Default Value: 1

  • Removed: - replaced by innodb_stats_auto_recalc.

  • Default Value: OFF

  • nulls_ignored: Ignore NULLs altogether from index group calculations.

  • See also Index Statistics, aria_stats_method and myisam_stats_method; a usage sketch follows this list.

  • Command line: --innodb-stats-method=name

  • Scope: Global

  • Dynamic: Yes

  • Data Type: enumeration

  • Default Value: nulls_equal

  • Valid Values: nulls_equal, nulls_unequal, nulls_ignored
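
A minimal sketch of changing how NULLs are counted in index statistics and refreshing the statistics afterwards; the table name t4 is illustrative.

  SET GLOBAL innodb_stats_method = 'nulls_ignored';
  -- recompute index statistics under the new counting rule
  ANALYZE TABLE t4;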

  • Data Type: numeric

  • Default Value: 0

  • Range: 0 to 18446744073709551615

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: OFF

  • Dynamic: Yes
  • Data Type: boolean

  • Default Value: ON

  • Data Type: numeric

  • Default Value: 20

  • If persistent statistics are enabled, then the innodb_stats_persistent_sample_pages system variable applies instead. Persistent statistics are enabled with the innodb_stats_persistent system variable.

  • This system variable has been deprecated. The innodb_stats_transient_sample_pages system variable should be used instead.

  • Command line: --innodb-stats-sample-pages=#

  • Scope: Global

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 8

  • Range: 1 to 2^64-1

  • Deprecated:

  • Removed:

  • This system variable does not affect the calculation of persistent statistics.
  • Command line: --innodb-stats-traditional={0|1}

  • Scope: Global

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: ON

  • If persistent statistics are enabled, then the innodb_stats_persistent_sample_pages system variable applies instead. Persistent statistics are enabled with the innodb_stats_persistent system variable. A usage sketch follows this list.

  • Command line: --innodb-stats-transient-sample-pages=#

  • Scope: Global

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 8

  • Range: 1 to 2^64-1
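
A brief sketch of sampling more pages for transient statistics, which makes cardinality estimates more stable at the cost of slower recalculation; 32 is an arbitrary illustrative value.

  -- subsequent transient statistics recalculations sample 32 leaf pages per index
  SET GLOBAL innodb_stats_transient_sample_pages = 32;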

  • Data Type: boolean

  • Default Value: 1

  • Removed: /XtraDB 5.6

  • Data Type: boolean

  • Default Value: OFF

  • Data Type: boolean

  • Default Value: OFF

  • Dynamic: Yes
  • Data Type: boolean

  • Default Value: ON

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: ON

  • Deprecated:

  • Removed:

  • Data Type: numeric

  • Default Value: 1

  • Range: 1 to 1024

  • Removed: MariaDB 10.6.0

  • Data Type: numeric

  • Default Value: 30

  • Range: 0 to 4294967295

  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: ON

  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 0

  • Range: 0 to 1000

  • Deprecated:

  • Removed: MariaDB 10.6.0

  • Dynamic: No
  • Data Type: boolean

  • Default Value: OFF

  • Removed: /XtraDB 5.6

  • Dynamic: Yes
  • Data Type: numeric

  • Default Value:

    • 0 (>= .)

    • 10000 (<= )

  • Range: 0 to 1000000

  • Deprecated:

  • Removed: MariaDB 10.6.0

  • Scope: Global
  • Dynamic: No

  • Data Type: string

  • Default Value: ibtmp1:12M:autoextend

  • Documentation & examples: innodb-temporary-tablespaces

  • Dynamic: Yes
  • Data Type: string

  • Default Value: Empty

  • Dynamic: No

  • Data Type: boolean

  • Default Value: OFF

  • Deprecated:

  • Data Type: boolean

  • Default Value: OFF

  • Deprecated:

  • Data Type: boolean

  • Default Value: OFF

  • Introduced:

  • Dynamic: No

  • Data Type: string

  • Default Value: NULL

  • Data Type: boolean

  • Default Value: OFF

  • Scope: Global
  • Dynamic: Yes

  • Data Type: numeric

  • Default Value: 128

  • Range: 0 to 128

  • Deprecated:

  • Removed: MariaDB 10.6.0

  • Command line: --innodb-undo-tablespaces=#

  • Scope: Global

  • Dynamic: No

  • Data Type: numeric

  • Default Value: 3 (>= ), 0 (<= MariaDB 10.11)

  • Range: 0, or 2 to 95

  • Data Type: boolean

  • Default Value: ON

  • Command line: innodb-use-fallocate={0|1}
  • Scope: Global

  • Dynamic: No

  • Data Type: boolean

  • Default Value: OFF

  • Deprecated: (treated as if ON)

  • Removed:

  • Scope: Global
  • Dynamic: Yes

  • Data Type: boolean

  • Default Value: ON

  • Deprecated:

  • Removed:

  • --innodb-use-mtflush={0|1}
  • Scope: Global

  • Dynamic: No

  • Data Type: boolean

  • Default Value: OFF

  • Deprecated:

  • Removed:

  • Scope: Global

  • Dynamic: No

  • Data Type: boolean

  • Default Value: ON

  • Dynamic: No
  • Data Type: numeric

  • Default Value: 1

  • Range: 0 to 32

  • Removed: XtraDB 5.5

  • Dynamic: No

  • Data Type: boolean

  • Default Value: OFF

  • Deprecated:

  • Removed:

  • Dynamic: No

  • Data Type: boolean

  • Default Value: ON

  • Deprecated:

  • Removed:

  • Dynamic: No

  • Data Type: boolean

  • Default Value: 0

  • Removed: /XtraDB 5.6

  • Dynamic: No

  • Data Type: boolean

  • Default Value: ON

  • Deprecated:

  • Removed:

  • Data Type: string
  • Removed:

  • Dynamic: Yes (>= MariaDB 10.11), No (<= )

  • Data Type: numeric

  • Default Value: 4

  • Range: 1 to 64
