An overview of changes, improvements, and what's new in MariaDB Enterprise Server 11.4
MariaDB Enterprise Server 11.4 introduces new features to MariaDB Enterprise. Enhancements include JSON functions for validation and comparison, SQL functions for natural sorting and custom formatting, and a new UUID data type. Operational improvements include an improved cost-based optimizer, a new feature for non-blocking online schema changes, and faster InnoDB imports. Security is strengthened with default SSL encryption, password reuse prevention, and Galera Cluster security improvements. Replication is enhanced with default GTID, binary log size limits, and optimistic ALTER TABLE. Monitoring benefits from JSON histograms, new thread states, and detailed error reporting. These updates aim to improve developer experience, database administration, and overall performance.
This document includes all major features and changes between 10.6 ES and 11.4 ES.
Upgrading from 10.6 should take only a few seconds. Please ensure that you take a backup of 10.6 before doing the upgrade.
The variable has no effect in 11.4, as all the features it controls are already enabled. The variable is deprecated and should be removed from existing config files.
As long as no new 11.4 features are used (by CREATE TABLE or DML statements) and the new variable is 0 (the default), one should be able to use a 10.6 slave with an 11.4 master.
New JSON function JSON_SCHEMA_VALID to validate a JSON document against a JSON schema, as documented by the JSON Schema Draft 2020. This function can also be used in a CHECK constraint to verify that only JSON documents are stored in the database which include required items and that the values are within a given range, length, etcetera.
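A minimal sketch of such a CHECK constraint, assuming a hypothetical products table and a simple schema requiring a non-negative numeric "price":
CREATE TABLE products (
  id INT PRIMARY KEY,
  attrs JSON,
  CONSTRAINT attrs_valid CHECK (JSON_SCHEMA_VALID(
    '{"type":"object","properties":{"price":{"type":"number","minimum":0}},"required":["price"]}',
    attrs))
);
-- accepted
INSERT INTO products VALUES (1, '{"price": 9.99}');
-- rejected by the CHECK constraint
INSERT INTO products VALUES (2, '{"price": "free"}');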
New JSON functions JSON_EQUALS() and JSON_NORMALIZE() for easier comparison of two JSON documents and for normalizing JSON objects to be comparable, for example when a unique key based on JSON data is needed.
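A minimal sketch of how the two functions could be used; the documents shown are illustrative only:
-- The documents differ in key order and whitespace but are semantically equal
SELECT JSON_EQUALS('{"a": 1, "b": 2}', '{ "b": 2, "a": 1 }') AS equal;
-- Normalization produces a canonical form, e.g. for use in a unique key
SELECT JSON_NORMALIZE('{ "b": 2, "a": 1 }') AS normalized;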
New function JSON_OVERLAPS()
New function NATURAL_SORT_KEY() which can be used to sort strings naturally.
Example: A string "v10" would be sorted after a string "v9"
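A minimal sketch, assuming a hypothetical releases table:
CREATE TABLE releases (name VARCHAR(20));
INSERT INTO releases VALUES ('v10'), ('v2'), ('v9');
-- Natural order: v2, v9, v10 (a plain ORDER BY name would return v10, v2, v9)
SELECT name FROM releases ORDER BY NATURAL_SORT_KEY(name);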
New function SFORMAT() for custom formats of strings. The function uses a string including formatting options and a set of given values to generate a custom formatted string.
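A minimal sketch with an illustrative format string and values:
-- {} placeholders are replaced by the given values in order
SELECT SFORMAT('Table {} has {} rows', 't1', 1234) AS msg;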
New data type UUID for more efficient storage of UUIDs
New data type INET4 to store IPv4 addresses as 4-byte binary strings. Benefits of storing IPv4 addresses in the INET4 data type (see the sketch after this list) are:
Validation of incorrect values
Comparisons
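A minimal sketch of the INET4 type, using a hypothetical hosts table:
CREATE TABLE hosts (addr INET4);
INSERT INTO hosts VALUES ('192.168.1.10'), ('10.0.0.1');
-- An invalid IPv4 address is rejected
INSERT INTO hosts VALUES ('999.1.1.1');
-- Sorting and comparisons use the 4-byte binary value, not string order
SELECT addr FROM hosts ORDER BY addr;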
Stored Functions now support the parameter qualifiers IN, OUT, INOUT, and IN OUT. The qualifiers follow the syntax already used for stored procedures and take the differences for Oracle into account when using the Oracle compatibility mode (sql_mode=ORACLE).
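A minimal sketch of a stored function with an OUT parameter (function name and logic are illustrative); OUT arguments can be passed as user variables, for example via SET:
DELIMITER $$
CREATE FUNCTION add_and_flag(IN a INT, IN b INT, OUT was_called INT) RETURNS INT
BEGIN
  SET was_called = 1;
  RETURN a + b;
END;
$$
DELIMITER ;
SET @flag = 0;
SET @sum = add_and_flag(2, 3, @flag);  -- @sum = 5, @flag = 1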
MariaDB Enterprise Server now supports descending indexes. Composite indexes can be used with differently ordered columns to get a significant performance boost in the corresponding ORDER BY use cases.
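A minimal sketch (table and index are illustrative) showing a composite index with mixed ordering that matches an ORDER BY clause:
CREATE TABLE events (
  user_id INT,
  created DATETIME,
  KEY idx_user_created (user_id, created DESC)
);
-- The ORDER BY can be resolved by the index, avoiding a filesort
SELECT * FROM events WHERE user_id = 42 ORDER BY created DESC;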
New, Detailed Replication Lag Representation
New Information Schema Table For Password Related Data
GTID binlog events now include the thread ID
Automatic SST user account management for Galera
New server-internal Online Schema Change (OSC) which makes all schema changes (ALTER TABLE commands) non-blocking.
For instant ALTER TABLE operations (e.g., where ALGORITHM=INSTANT is used) OSC is not needed. However, for all other ALTER TABLE operations, OSC will be used whenever possible.
CONVERT PARTITION and CONVERT TABLE used with ALTER TABLE can be used to convert a partition into a table or vice versa.
Exchange a Partition or Convert a Table Without Validation
The process of exchanging a partition with a table, or converting a table to a partition, can be a very slow operation, especially for larger tables, because for each new data row the partitioning definition needs to be verified to validate that the row indeed belongs in this partition.
Syntax extension to not require PARTITION keyword in each partition definition
Stored routines now take metadata changes into account whenever the stored routine is called, for any object the stored routine depends on.
In older release series a reconnect was needed before a stored routine was able to update its metadata from altered objects
In the above example both calls of the stored procedure p1 would have returned the same result set with the older release series.
Temporary tables are now included in information_schema.tables and in the output of SHOW TABLES and SHOW TABLE STATUS
Example:
General Support of Packages for Stored Routines
Before MariaDB Enterprise Server 11.4, the CREATE PACKAGE feature, as well as CREATE PACKAGE BODY, were only supported with sql_mode = ORACLE. They can now be used with any SQL mode.
Example:
System-versioned tables partitioned by SYSTEM_TIME INTERVAL/LIMIT can automate the creation of new HISTORY partitions by using the keyword AUTO when creating a table.
In this case a new HISTORY partition will be created on a monthly basis, storing old versions of the table data.
Information About Application-time Period Tables
New views PERIODS and KEY_PERIOD_USAGE are added to information_schema.
A dump of historical data for system versioned tables is now possible via the new option --as-of for mariadb-dump
System versioned tables can now be dumped and restored by mariadb-dump
The new mariadb-dump option --dump-history dumps all historical data
New collations based on the Unicode Collation Algorithm (UCA) 14.0.0 have been added for the character sets utf8mb3, utf8mb4, ucs2, utf16, utf32
One neutral and 22 language specific collations have been added
Case sensitive, case insensitive, and nopad variants have been added
Collations (how to compare characters) are now separated from character sets (how to store characters). Collation names no longer have to include character set names, and the same collation can apply to many character sets
Optimizations to information schema system tables
The information schema provides tables with metadata about stored procedures and stored functions, which are often used by third-party tools and MariaDB Connectors to retrieve details about existing routines. Previously, a high number of rows in these tables could result in a performance impact. We have made a number of internal improvements to eliminate this performance impact for use cases where metadata has to be queried regularly.
New optimizer cost model, changing from a more rule-based to a cost-based model. A huge effort went into improving the calculation of the optimizer costs, taking state-of-the-art SSD disks into account. The new implementation also takes the different characteristics of each storage engine into account.
If a key lookup cannot be used, the optimizer can now make better choices when to use index scan, table scan, index merges, or other methods to join data.
Changes to the optimizer now allow the use of an index for a comparison of a DATE function to a constant value.
The optimizer also has been enhanced to allow single-table UPDATE and DELETE to take advantage of semi-join optimization.
Improved optimizer performance in the case of joins with many eq_ref tables
Automatic SST User Account Management for MariaDB Enterprise Cluster
The State Snapshot Transfer (SST) method, needed to provide a full data copy to a new node, requires a dedicated account to access the remote server (donor) during the SST process.
MariaDB Enterprise Cluster (Galera) now creates this user internally for the duration of an SST, which makes creating the account manually obsolete. This also removes the requirement to provide a user and password via a configuration file. Having the user created by Galera also ensures that the needed privileges are set.
Incompatibility change: Replication is now using Global Transaction IDs (GTID) by default to make replicas crash safe
The default of CHANGE MASTER TO for master_use_gtid changes from no to slave_pos
A fresh slave start, a RESET SLAVE, or a CHANGE MASTER TO without an explicit master_use_gtid option will now use GTID (gtid_slave_pos) by default.
New "SQL_BEFORE_GTIDS" and "SQL_AFTER_GTIDS" for START REPLICA UNTIL
The new options SQL_BEFORE_GTIDS and SQL_AFTER_GTIDS for START REPLICA UNTIL allow user control of whether the replica stops before or after a provided GTID state. Its syntax is:
When providing SQL_BEFORE_GTIDS="<gtid_list>", for each domain specified in the gtid_list, the replica will execute transactions up to the GTID found, and immediately stop processing events in that domain without executing the transaction of the specified GTID.
Once all domains have stopped, the replica will stop. Events originating from domains that are not specified in the list are not replicated.
START SLAVE UNTIL SQL_AFTER_GTIDS="<gtid_list>" is an alias to the default behavior of START SLAVE UNTIL master_gtid_pos="<gtid_list>", the known behavior before MariaDB Enterprise Server 11.4.
If a primary server has a binary log consisting of the following GTIDs:
Given a fresh replica (i.e., one with an empty GTID position, @@gtid_slave_pos='') is started with SQL_BEFORE_GTIDS, i.e.,
the resulting gtid_slave_pos of the replica will be "1-1-1" because the replica will execute only events from domain 1. When it sees the transaction with sequence number 2 it immediately stops without executing it.
If the replica is started with SQL_AFTER_GTIDS, i.e.,
the resulting gtid_slave_pos of the replica will be "1-1-2" because the replica will execute only events from domain 1. But in this case it stops after executing the provided GTID.
New, detailed replication lag representation
The Seconds_Behind_Master field of SHOW REPLICA STATUS can be complex and confusing, especially when parallel replication, delayed replication, or the option sql_slave_skip_counter is used. To help provide a consistent view of replication lag, three new fields have been added to the statement's output to provide specific timing information about the state of the IO and SQL threads.
New optimistic ALTER TABLE for replicas. When enabled by binlog_alter_two_phase = 1 (not default), an ALTER TABLE is executed on the primary server and is replicated and "started" on the replica server more or less in parallel to the primary server. Thus, the possibly huge replication lag between a primary and replica server due to a long running ALTER TABLE on the primary can be avoided.
The command line tool mariadb-binlog now supports the use of global transaction IDs (GTID) for the options start-position and stop-position. mariadb-binlog can now be used to produce results filtered by the defined GTIDs.
The command-line tool mariadb-binlog now supports the new options --do-domain-ids, --ignore-domain-ids, and --ignore-server-ids. mariadb-binlog can now be used to produce results filtered by domain IDs or server IDs.
Client to Server connection now SSL Encrypted by Default
Using SSL/TLS has been simplified with MariaDB Enterprise Server 11.4. Before version 11.4, proper SSL configuration required multiple manual steps for the server, and all the clients connecting to it.
Now the client can verify the server's self-signed certificate without any configuration whatsoever. The server automatically generates the SSL certificate and the client automatically verifies it as needed.
This will result in:
The new plugin password_reuse_check can be used to prevent a password from being reused. The number of days until a password can be reused can be configured via the new parameter password_reuse_check_interval
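A minimal sketch, assuming the plugin is installed from its shared library and a 365-day interval is wanted:
INSTALL SONAME 'password_reuse_check';
SET GLOBAL password_reuse_check_interval = 365;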
New allowlist for MariaDB Galera Cluster nodes to restrict which nodes can join a cluster, increasing security.
A new system variable wsrep_allowlist can be used to define a list of IP addresses. Only nodes from these IP addresses can join a running cluster.
Example with the new privilege SHOW CREATE ROUTINE:
SHOW CREATE ROUTINE privilege can be granted globally, per schema, or on individual routines.
Retrieve Users Privileges on a Specific Table
MariaDB Server provides information about the privileges a user has on a table in different views in the INFORMATION_SCHEMA, split between global, schema, and table privileges. Before MariaDB Enterprise Server 11.4, there was no easy way to list all the tables a user has access to; this information had to be queried from several tables.
A new information Schema view, USERS, has been added, which DBAs can use to get insights about password related information for a user.
This information can be used:
by an application to inform a user about a password about to expire or an account which is at risk of being blocked due to the number of wrong passwords entered
Engine-defined attributes can now also be defined per-partition for more flexible configurations.
The space occupied by freed pages within the InnoDB system tablespace can be reclaimed by adding an :autoshrink attribute to innodb_data_file_path, like:
This allows the system tablespace to be truncated after the last allocated page within it, all the way to the specified minimum size (here: 12MiB). In older release series InnoDB data files never shrink in size during normal operation. One could shrink .ibd files by rebuilding tables with OPTIMIZE TABLE, or the undo tablespace files by SET GLOBAL innodb_undo_log_truncate=ON.
Shrink temporary InnoDB tablespaces without restart
Before MariaDB Enterprise Server 11.4 the only way to reclaim disk space used by temporary InnoDB tablespaces was to restart the server, as temporary tablespaces are deleted when you stop the server, and are recreated with their configured size.
Setting the new system variable innodb_truncate_temporary_tablespace_now triggers reclaiming of the disk space, but existing temporary tables will not be removed.
Example:
The process to import an InnoDB tablespace is now simplified. While the old process required first creating a table and discarding its tablespace before executing ALTER TABLE ... IMPORT TABLESPACE, now ALTER TABLE ... IMPORT TABLESPACE is the only command needed.
Example:
The InnoDB storage engine has gone through additional code cleanup and we have removed some configuration parameters used by the InnoDB Change Buffering feature, a feature which was disabled for earlier release series.
New system variables like innodb_log_file_buffering, innodb_data_file_buffering, innodb_log_file_write_through, and innodb_data_file_write_through have been added to allow better control for log files and data files. They can be set dynamically while the Server is running.
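A minimal sketch of changing one of these variables at runtime (the chosen setting is illustrative):
SET GLOBAL innodb_log_file_write_through = ON;
SHOW GLOBAL VARIABLES LIKE 'innodb_log_file_write_through';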
The SPIDER storage engine now allows the use of engine-defined attributes (table options), similar to other storage engines, and more convenient than the current method of providing parameters via COMMENT for a table.
Improved error reporting for INSERT that inserts multiple rows. The property ROW_NUMBER in GET DIAGNOSTICS allows retrieval of the row number that caused the error or warning.
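A minimal sketch inside a stored procedure, assuming a hypothetical table t1 with a NOT NULL column; the handler captures which row of a multi-row INSERT raised the error:
DELIMITER $$
CREATE PROCEDURE load_rows()
BEGIN
  DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
  BEGIN
    GET DIAGNOSTICS CONDITION 1 @failed_row = ROW_NUMBER, @msg = MESSAGE_TEXT;
    SELECT @failed_row AS failed_row, @msg AS error_message;
  END;
  -- assuming t1.id is declared NOT NULL, the third row fails
  INSERT INTO t1 (id) VALUES (1), (2), (NULL);
END;
$$
DELIMITER ;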
Implementation of JSON histograms with detailed histogram collection, used when histogram_type=JSON_HB (not the default) is set. Using JSON histograms results in more precise data statistics over string data types or when columns have highly-uneven data distribution. More precise statistics allow the optimizer to create better query plans resulting in faster queries.
ANALYZE [FORMAT=JSON] <select>
The new column SENT_ROWS in the information schema table PROCESSLIST shows the number of rows sent by the current statement.
Selects with functions show the total number of rows sent by the main statement and all functions
Stored procedures show the total number of rows sent per stored procedure statement
The SQL Error Log plugin can be used to log errors sent to clients for later analysis. When the option sql_error_log_with_db_and_thread_info=ON is set, the log file now also shows the thread ID and the current default schema for the error.
See also:
New function JSON_KEY_VALUE(<json_doc>,<json_path>), which extracts key/value pairs from a JSON object. The JSON path parameter is used to only return key/value pairs for matching JSON objects.
Example:
The function JSON_KEY_VALUE() can be used as an argument to JSON_TABLE(), which allows adding the key to a result set.
Example:
New function JSON_ARRAY_INTERSECT(<array1>, <array2>), used to find the intersection between two JSON arrays.
Example:
The new JSON function JSON_OBJECT_TO_ARRAY(<json_doc>) is used to convert all JSON objects found in a JSON document to JSON arrays where each item in the outer array represents a single key-value pair from the object.
Example:
Resulting arrays can be compared using JSON_ARRAY_INTERSECT():
The new JSON function JSON_OBJECT_FILTER_KEYS(<json_doc>,<array_keys>) returns key/value pairs from a JSON string for keys defined in <array_keys>.
Example:
By using JSON_ARRAY_INTERSECT() and JSON_KEYS() as arguments for JSON_OBJECT_FILTER_KEYS(), a comparison of two JSON strings is possible where only the same keys are compared, not the key/value pairs.
Example (only show key/value pairs of json1 where the key exists in json2):
To address a position in a JSON array counting from the end, negative indexes or the keyword last can now be used in a JSON path that is passed as a parameter to a JSON function.
Range notation for JSON paths, using the keyword to to define a range of elements, e.g. '$.A[1 to 3]'.
New function RANDOM_BYTES(), which returns a binary string of a given length between 1 and 1024 bytes. The nondeterministic value is generated by the random number generator of the SSL library, producing random bytes that are suitable for cryptographic use.
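A minimal sketch, generating 16 random bytes, e.g. for use as an initialization vector:
SELECT HEX(RANDOM_BYTES(16)) AS iv;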
The encryption functions AES_ENCRYPT() and AES_DECRYPT() now support adding the two new parameters initialization vector (iv) and block encryption mode (mode).
Syntax for older release series:
New syntax:
If no mode is provided, the mode from the new system variable block_encryption_mode will be used
Example (using the mode from system variable block_encryption_mode):
Example (mode provided as argument):
The new options %Z and %z can be used for the format string of the DATE_FORMAT function for adding time zone information to the date string
%Z Time zone abbreviation
%z Numeric time zone +hhmm or -hhmm representing the hour and minute offset from UTC
Example:
The SQL function KDF() is a key derivation function, useful for generating encryption keys from a user provided password or a passphrase. It can be used to generate encryption keys for encryption functions such as AES_ENCRYPT.
kdf_name is "hkdf" or "pbkdf2_hmac"
width (in bits) can be any number divisible by 8
info is a non-secret parameter of the hkdf method; it allows generating different encryption keys for different purposes from the same secret password
iterations is a positive numeric parameter of the pbkdf2_hmac method, larger values make the password more difficult to brute-force.
Example:
The function CONV(), which converts a number between numeric base systems, now supports conversions up to base 62. This allows conversions to encodings using capital letters A-Z, lowercase letters a-z, and digits 0-9. The old limit was base 36, which does not include lowercase letters.
Example:
Sorting
Functions like CAST
Changed default behavior for TIMESTAMP field properties
The default for explicit_defaults_for_timestamp is set to ON resulting in removing the nonstandard behavior for TIMESTAMP fields in CREATE TABLE
The properties DEFAULT CURRENT_TIMESTAMP or ON UPDATE CURRENT_TIMESTAMP are not set anymore for the first TIMESTAMP field in a table if not explicitly set
A TIMESTAMP field does not get the property NOT NULL set anymore if not explicitly set
The old behavior can be achieved by setting the properties explicitly or by setting explicit_defaults_for_timestamp to OFF
Extended TIMESTAMP Maximum Value
For 64 bit systems the maximum value for the TIMESTAMP data type has been extended from ‘2038-01-19 03:14:07 UTC’ to ‘2106-02-07 06:28:15 UTC’ without changing the storage format.
System versioned tables use the maximum value of TIMESTAMP for their row_end field to indicate a currently valid value. These values should be changed to the new maximum value:
ALTER TABLE can be used for this conversion
A new option --update-history for mariadb-dump allows converting row_end values to the new maximum while running a dump
Option To Limit Disk Space Used For Temporary Files And Tables
When internal in-memory temporary tables are reaching a memory limit, they need to be stored on disk in directories defined via the tmpdir system variable.
Two new system variables have been added to define the maximum storage to be used for such temporary tables and other internally created temporary files:
max_tmp_session_space_usage – to limit the disk storage used per session
max_tmp_total_space_usage – to limit the total disk storage used by the MariaDB Server instance
Any query which exceeds the limit of temporary storage will fail with an error.
Two new status variables can be used to monitor the currently used storage:
tmp_space_used
max_tmp_space_used
See also . The space used per session is also shown in the PROCESSLIST view in the information schema.
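A minimal sketch using the variables named above (the chosen limit is illustrative):
-- limit temporary space to 1 GiB per session
SET GLOBAL max_tmp_session_space_usage = 1024*1024*1024;
-- check the peak temporary space used so far
SHOW GLOBAL STATUS LIKE 'max_tmp_space_used';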
A Software Bill of Materials (SBOM) JSON file is now provided in the downloads archive
Vector Embeddings
Vector embeddings are numerical representations [0.2, -0.5, 0.8, 0.1, ...] that capture semantic meaning or features of data in a multi-dimensional space. The embedding transforms simple to complex data such as text, images, audio, or video into a series of numbers (a vector) where similar items are positioned together in the multi-dimensional vector space.
For example, a word embedding of the word "dog" would be close in a vector embedding space to the word "puppy", whereas "dog" would not be close to the word "airplane". The embedding representation can be used in similarity search, recommendation systems, or more generally in traditional AI/ML systems and GenAI systems.
New Data Type VECTOR(N)
With the RC release a new data type VECTOR(N) has been added to represent the vector column, where N is the number of dimensions.
Example:
Conversion Functions
VEC_FromText() converts a text representation of the vector (a json array of numbers) to a vector (little-endian IEEE float sequence of bytes, 4 bytes per float).
Example:
VEC_ToText() converts a binary vector into a json array of numbers (floats).
Example:
Comparison Functions
Comparing vectors, calculating how close they are, is the key functionality needed by an application working with vector search. Two functions exist for calculating the distance between vectors. Which one to use depends on the application and on how the vectors were generated.
VEC_DISTANCE_EUCLIDEAN() takes two vectors and computes the straight line (L2) Euclidean distance between two multi-dimensional points in the vector space
Example:
VEC_DISTANCE_COSINE() measures the cosine distance between two vectors in a multi-dimensional vector space
Example:
Configuration Options
The vector search feature requires some configuration for controlling its general behavior. Four new system variables have been added:
mhnsw_max_cache_size - Upper limit for one MHNSW vector index cache
The OSC feature works by creating a change buffer for storing changes during the copying of data from the old format to the new one. While data is copied from the old table structure to the new one, all changes are stored in the change buffer and the table is fully accessible. Once the copying process is complete, the change buffer is applied to the new data structure, requiring only a very short locking period.
Having an internal OSC in the server eliminates the need for using external command line tools in order to reduce table locks. These external tools often need to create complicated structures in the database (like triggers and stored procedures) and certain race conditions can lead to the operations never finishing.
In MariaDB Enterprise Server 11.4 a default ALTER operation will be an OSC operation if possible. If the operation cannot be performed as an OSC then another algorithm will be used. If the option LOCK=NONE is explicitly specified in the ALTER statement, then the operation will fail if it cannot be done as an OSC.
This process can now be sped up by disabling this validation. This new feature should be used with care, as it can lead to inconsistencies if the partitioning rules are not met.
The new addition to ALTER TABLE is:
View PERIODS includes the columns
TABLE_CATALOG
TABLE_SCHEMA
TABLE_NAME
PERIOD_NAME
START_COLUMN_NAME
END_COLUMN_NAME
to list Application-time period tables, the name defined for a period, and the columns used for start and end timestamps.
View KEY_PERIOD_USAGE includes the columns
CONSTRAINT_CATALOG
CONSTRAINT_SCHEMA
CONSTRAINT_NAME
TABLE_CATALOG
TABLE_SCHEMA
TABLE_NAME
PERIOD_NAME
Two new columns are added to the COLUMNS view of information_schema
IS_SYSTEM_TIME_PERIOD_START
IS_SYSTEM_TIME_PERIOD_END
To restore from a dump file the new parameter system_versioning_insert_history needs to be enabled to allow direct inserts into ROW_START and ROW_END columns
The existing parameter secure_timestamp needs to be set to a value which allows changing session timestamps
The command-line tool mariadb-dump now supports the new option --order-by-size. The new option can be used to create a dump of the tables of a database according to their size, smaller tables first.
Parallelism for mariadb-dump
When mariadb-dump is used with the option -T / --tab= to produce tab-separated text-format data files per table, the new option --parallel (synonym --use-threads) can be used to dump the table data to their .txt files using several threads in parallel. Parallelism also works if the option --single-transaction is used.
The option --parallel has been added to mariadb-import as a synonym to --use-threads, which has been available before.
Improved contraction performance in UCA collations
Improved UCA collation performance for utf8mb3 and utf8mb4
Full UNICODE support for MariaDB command-line tools on recent versions of Windows (Windows 10 1909 or later, Windows 11, Windows Server 2020 supported). The server my.ini config file is now also encoded in UTF8. Command-line client mariadb.exe uses utf8mb4 by default.
The default collation used for a character set can now be changed globally or for a session via the new system variable character_set_collations. The default collation will be used whenever a character set is defined for a database object without defining the collation.
When not defining a character set the default collation is still the one defined with the system variable collation_server.
This is also preparatory work for changing default collations to use the UCA 14.0.0 standard. In particular, this variable will allow replication between servers with different default collations.
While one model may work well for a specific use case, it may not be the right model for other use cases. With the changes we've made, it's now possible to fine-tune the optimizer by changing costs for different metrics.
A new plugin is available, enabled via the plugin-wsrep-provider option. If the plugin is used, the wsrep provider options are split into separate options. The use of the plugin is optional.
Global Limitation of Space Used by Binary Logs
The new system variable max_binlog_total_size (alias binlog_space_limit) enables binary log purging when the total size of all binary logs exceeds the specified threshold. The default for max_binlog_total_size is 0, meaning that there is no limit. The system variable can be changed without restarting the server.
The new system variable --slave-connections-needed-for-purge, set to 1 by default, ensures that binary log purging will not happen until at least that many replicas are connected and no longer need the binary logs that would be purged.
The new status variable binlog_disk_use can be used to query the disk space currently used by the binary logs.
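A minimal sketch using the variables named above (the chosen limit is illustrative):
-- purge old binary logs once their total size exceeds 50 GiB
SET GLOBAL max_binlog_total_size = 50 * 1024 * 1024 * 1024;
-- check the disk space currently used by the binary logs
SHOW GLOBAL STATUS LIKE 'binlog_disk_use';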
Index for Binary Log on GTIDs
An index is now created on the GTIDs of the binary log, which allows a connecting replica to find the position it should start from without the need to scan the whole binary log.
The new system variable binlog_gtid_index (default ON) can be used to disable the creation of indexes.
The new system variable binlog_gtid_index_page_size (default 4096) defines the page size to use for the binary log GTID index.
The new system variable binlog_gtid_index_span_min (default 65536) controls the sparseness of the binary log GTID index.
The new status variables binlog_gtid_index_hit and binlog_gtid_index_miss can be used for monitoring purposes. A miss is an indication that the index file is missing.
GTID Binlog Events Now Include Thread ID
The thread ID and the corresponding statement can now be retrieved from binary logs.
The output of mariadb-dump also includes the thread ID.
A new option slave_max_statement_time for the SQL thread has been added to allow setting a maximum execution time for a replicated query
This allows controlling the maximum time a replica can be behind a primary when the main goal is a small replication lag, at the cost of potentially being out of sync. It is therefore disabled by default.
The binary log filter options binlog-do-db, binlog-ignore-db, and binlog-row-event-max-size are now visible as system variables.
Example:
The replica will execute transactions originating from the domain IDs provided in the list, and will stop once all transactions provided in the UNTIL list have been executed.
Example:
Master_last_event_time
Timestamp of the last event read from the primary by the IO thread
Slave_last_event_time
Timestamp from the primary of the last event committed on the replica
Master_Slave_time_diff
The difference of the above two timestamps
New authentication plugin PARSEC (Password Authentication using Response Signed with Elliptic Curves).
PARSEC improves security over old authentication plugins by introducing salted passwords, a time-consuming key derivation function, and a client-side scramble to ensure that man-in-the-middle attackers cannot control the client response.
Example on how to create a user using the new authentication plugin:
The new SQL syntax GRANT ... TO PUBLIC can now be used to easily grant privileges on databases or tables to any user who has access to the server.
SHOW GRANTS FOR PUBLIC is an enhancement to the existing SHOW GRANTS syntax to retrieve all privileges granted to PUBLIC
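A minimal sketch, assuming a hypothetical app_db schema:
-- every authenticated user gets read access to app_db
GRANT SELECT ON app_db.* TO PUBLIC;
SHOW GRANTS FOR PUBLIC;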
Fine-grained privileges have been removed from the SUPER privilege.
The SUPER privilege is still used for some special cases, like using DES_ENCRYPT and DES_DECRYPT without an explicit key, for debug settings, and some system variables for changing them with SET GLOBAL.
With MariaDB Enterprise Server 11.4, changes were made to privileges to allow real read only replicas with no other privileges. This was achieved by removing the READ ONLY ADMIN privilege from the SUPER privilege and making it a new privilege. Hence the READ ONLY ADMIN privilege now needs to be granted explicitly, if this user should have write access to a read only replica (a replica having read_only=1 set).
New privilege SHOW CREATE ROUTINE
Before MariaDB Enterprise Server 11.4 a user could only see the definition of a stored routine (a stored procedure or function) when either of the following was met:
They had been granted the SELECT privilege on the mysql.proc table
The user was the definer of the Stored Procedure
The new privilege SHOW CREATE ROUTINE has been introduced to enable any user with this privilege to view the definition of a stored routine.
Example without privilege SHOW CREATE ROUTINE:
MariaDB Enterprise Server 11.4 now provides a new view privileges_by_table_by_level in the SYS schema that lists the privilege and privilege level per user, schema, and table.
Example:
by DBAs to query users which have been blocked because of too many invalid passwords entered
The new view includes the fields:
USER - A string including user name and host
PASSWORD_ERRORS - A counter with the current number of wrong passwords entered
Reset to 0 when a correct password has been entered
An account is blocked, if max_password_errors is reached
NULL for accounts with privilege CONNECTION ADMIN
PASSWORD_EXPIRATION_TIME - The date and time when the password expires or NULL, if the password never expires
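A minimal sketch of querying the new view; the thresholds are illustrative:
SELECT USER, PASSWORD_ERRORS, PASSWORD_EXPIRATION_TIME
FROM information_schema.USERS
WHERE PASSWORD_ERRORS > 0
   OR PASSWORD_EXPIRATION_TIME < NOW() + INTERVAL 7 DAY;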
Restarting the server is not always possible, while you still need to reclaim disk space. MariaDB Enterprise Server 11.4 can solve this problem by executing SET GLOBAL innodb_truncate_temporary_tablespace_now=1.
Changes to the InnoDB redo log format to reduce write amplification, which can result in better performance.
The system variables innodb_write_io_threads and innodb_read_io_threads are now dynamic, and their values can be changed without restarting the server
SHOW ANALYZE FORMAT=JSON for <conn_id>
ANALYZE FORMAT=JSON now shows the time spent in the query optimizer
In some cases optimizing the query can take a while. ANALYZE FORMAT=JSON now reports time as "query_optimization": { "r_total_time_ms": NNNN.NNN } in the JSON string
SHOW EXPLAIN for <conn_id>, which returns an EXPLAIN for a query running in another connection, has been extended to return the more detailed JSON output by using the syntax SHOW EXPLAIN [FORMAT=JSON] FOR <conn_id>.
Syntax EXPLAIN [ FORMAT=JSON] FOR CONNECTION <conn_id> is also supported.
New status monitoring features for MariaDB Enterprise Cluster:
New thread states in PROCESSLIST for MariaDB Galera Cluster allow better tracking of a session status
"waiting to execute in isolation"
"waiting for TOI DDL"
"waiting for flow control"
"waiting for certification"
MariaDB Enterprise Cluster has added a new feature to save wsrep node status changes in a dedicated machine-readable JSON file. This makes it easier for an external monitoring tool to read and interpret the status file. A filename needs to be specified via the option wsrep_status_file to enable the feature.
The JSON file also includes details about a node's eviction status, to report that a Galera node needs to be restarted to join the cluster.
MariaDB Enterprise Cluster now includes progress reporting of MariaDB Enterprise Backup based SST when wsrep-debug=1 is set and the tool pv is installed. The SST progress report is then written into the server log:
INSERT RETURNING and DELETE RETURNING show the total number of rows sent for the returning data set
Example:
SELECT JSON_KEY_VALUE('[[1, {"key1":"val1", "key2":"val2"}, 3], 2, 3]', '$[0][1]');
+-----------------------------------------------------------------------------+
| JSON_KEY_VALUE('[[1, {"key1":"val1", "key2":"val2"}, 3], 2, 3]', '$[0][1]') |
+-----------------------------------------------------------------------------+
| [{"key": "key1", "value": "val1"}, {"key": "key2", "value": "val2"}] |
+-----------------------------------------------------------------------------+
SELECT jt.* FROM JSON_TABLE(
JSON_KEY_VALUE('[[1, {"key1":"val1", "key2":"val2"}, 3], 2, 3]', '$[0][1]'),'$[*]'
COLUMNS (
k VARCHAR(20) PATH '$.key',
v VARCHAR(20) PATH '$.value',
id FOR ORDINALITY )) AS jt;
+------+------+------+
| k | v | id |
+------+------+------+
| key1 | val1 | 1 |
| key2 | val2 | 2 |
+------+------+------+
SET @array1= '[1,2,3]';
SET @array2= '[1,2,4]';
SELECT json_array_intersect(@array1, @array2) AS result;
+--------+
| result |
+--------+
| [1, 2] |
+--------+
SET @json1= '[[1,2,3],[4,5,6],[1,1,1]]';
SET @json2= '[[1,2,3],[4,5,6],[1,3,2]]';
SELECT json_array_intersect(@json1, @json2) AS result;
+------------------------+
| result |
+------------------------+
| [[1, 2, 3], [4, 5, 6]] |
+------------------------+
SET @json1= '{ "a" : [1,2,3] , "b": {"key1": "val1", "key2": {"key3": "val3"}} }';
SELECT JSON_OBJECT_TO_ARRAY(@json1) AS result;
+-----------------------------------------------------------------------+
| result |
+-----------------------------------------------------------------------+
| [["a", [1, 2, 3]], ["b", {"key1": "val1", "key2": {"key3": "val3"}}]] |
+-----------------------------------------------------------------------+
SET @json1='{"a":[1,2,3],"b":{"key1":"val1","key2":{"key3":"val3"}}}';
SET @json2='{"a":[1,2,3]}';
SELECT JSON_OBJECT_TO_ARRAY(@json1) INTO @array1;
SELECT JSON_OBJECT_TO_ARRAY(@json2) INTO @array2;
SELECT JSON_ARRAY_INTERSECT(@array1,@array2) as result;
+--------------------+
| result |
+--------------------+
| [["a", [1, 2, 3]]] |
+--------------------+
SET @json1= '{ "a": 1, "b": 2, "c": 3}';
SELECT JSON_OBJECT_FILTER_KEYS (@json1, ' ["b", "c"] ') AS result;
+------------------+
| result |
+------------------+
| {"b": 2, "c": 3} |
+------------------+SET @json1= '{ "a": 1, "b": 2, "c": 3}';
SET @json2= '{"b" : 10, "c": 20, "d": 30}';
SELECT JSON_OBJECT_FILTER_KEYS (@json1, json_array_intersect(json_keys(@json1), json_keys(@json2))) AS result;
+------------------+
| result |
+------------------+
| {"b": 2, "c": 3} |
+------------------+
SELECT JSON_REMOVE(@json, '$.A[-10]');
SELECT JSON_REMOVE(@json, '$.A[last]');
SELECT JSON_REMOVE(@json, '$.A[1 to 3]');
AES_ENCRYPT(str, key_str)
AES_ENCRYPT(str, key [, iv [, mode]])
SELECT @@block_encryption_mode;
+-------------------------+
| @@block_encryption_mode |
+-------------------------+
| aes-128-ecb |
+-------------------------+
SELECT HEX(AES_ENCRYPT('MariaDB','mykey','vector')) AS result;
+----------------------------------+
| result |
+----------------------------------+
| CD0352A4B2FB18A592C04FF8CDA6C2F2 |
+----------------------------------+
SELECT AES_DECRYPT(x'CD0352A4B2FB18A592C04FF8CDA6C2F2','mykey','vector') AS result;
+---------+
| result |
+---------+
| MariaDB |
+---------+
SELECT HEX(AES_ENCRYPT('MariaDB','mykey','thisismy256vector','aes-256-cbc')) AS result;
+----------------------------------+
| result |
+----------------------------------+
| CD6C47183B89A813557BFD639A893CE3 |
+----------------------------------+
SELECT AES_DECRYPT(x'CD6C47183B89A813557BFD639A893CE3','mykey','thisismy256vector','aes-256-cbc') AS result;
+---------+
| result |
+---------+
| MariaDB |
+---------+
DATE_FORMAT(DATE, format)
SELECT DATE_FORMAT(NOW(), '%W %d %M %Y %H:%i:%s %Z %z');
+--------------------------------------------------+
| DATE_FORMAT(NOW(), '%W %d %M %Y %H:%i:%s %Z %z') |
+--------------------------------------------------+
| Tuesday 21 November 2023 13:28:34 EST -0500 |
+--------------------------------------------------+
KDF(key_str, salt [, {info | iterations} [, kdf_name [, width ]]])
SELECT hex(kdf('foo', 'bar', 'info', 'hkdf'));
+----------------------------------------+
| hex(kdf('foo', 'bar', 'info', 'hkdf')) |
+----------------------------------------+
| 710583081D40A55F0B573A76E02D8975 |
+----------------------------------------+
insert into tbl values (aes_encrypt(@secret_data, kdf("Passw0rd", "NaCl", "info", 'hkdf'), "iv"));
SELECT CONV(61,10,36);
+----------------+
| CONV(61,10,36) |
+----------------+
| 1P |
+----------------+
SELECT CONV(61,10,62);
+----------------+
| CONV(61,10,62) |
+----------------+
| z |
+----------------+
CREATE TABLE myVectorSearch (
id INT PRIMARY KEY,
v VECTOR(5) NOT NULL,
VECTOR INDEX (v)
);
select hex(vec_fromtext('[1,2,3]'));
+------------------------------+
| hex(vec_fromtext('[1,2,3]')) |
+------------------------------+
| 0000803F0000004000004040 |
+------------------------------+
EXCHANGE PARTITION partition_name WITH TABLE tbl_name [{WITH | WITHOUT} VALIDATION]
CONVERT TABLE normal_table TO partition_definition [{WITH | WITHOUT} VALIDATION]
CREATE TABLE t1 (id INT);
INSERT INTO t1 VALUES (100);
CREATE PROCEDURE p1() SELECT * FROM t1;
CALL p1;
+------+
| id |
+------+
| 100 |
+------+
ALTER TABLE t1 ADD COLUMN b INT DEFAULT 0;
CALL p1;
+------+------+
| id | b |
+------+------+
| 100 | 0 |
+------+------+
CREATE DATABASE test;
USE test;
CREATE TABLE t1 (id INT);
CREATE TEMPORARY TABLE t2_temp (id INT);
SHOW FULL TABLES;
+----------------+-----------------+
| Tables_in_test | Table_type |
+----------------+-----------------+
| t2_temp | TEMPORARY TABLE |
| t1 | BASE TABLE |
+----------------+-----------------+
SELECT table_schema, table_name, table_type FROM information_schema.tables WHERE table_schema='test';
+--------------+------------+------------+
| table_schema | table_name | table_type |
+--------------+------------+------------+
| test | t2_temp | TEMPORARY |
| test | t1 | BASE TABLE |
+--------------+------------+------------+
DELIMITER $$
CREATE OR REPLACE PACKAGE myPkg
PROCEDURE p1();
FUNCTION f1() RETURNS INT;
END;
$$
CREATE OR REPLACE PACKAGE BODY myPkg
-- variable declarations
DECLARE v1 INT DEFAULT 1;
DECLARE v2 INT DEFAULT 10;
-- routine declarations
PROCEDURE p1()
BEGIN
SELECT v1, v2;
END;
FUNCTION f1() RETURNS INT
BEGIN
RETURN v1;
END;
-- package initialization
SET v1=v1 + 2;
END;
$$
DELIMITER ;
SELECT myPkg.f1();
+------------+
| myPkg.f1() |
+------------+
| 3 |
+------------+
CALL myPkg.p1();
+------+------+
| v1 | v2 |
+------+------+
| 3 | 10 |
+------+------+
CREATE TABLE t1 (x INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 months AUTO;
SET @@character_set_collations='utf8mb4=uca1400_ai_ci';
CREATE DATABASE test_with_charset CHARACTER SET utf8mb4;
CREATE DATABASE test;
SELECT SCHEMA_NAME,DEFAULT_COLLATION_NAME FROM SCHEMATA WHERE SCHEMA_NAME LIKE "test%";
+-------------------+------------------------+
| SCHEMA_NAME | DEFAULT_COLLATION_NAME |
+-------------------+------------------------+
| test_with_charset | utf8mb4_uca1400_ai_ci |
| test | utf8mb4_general_ci |
+-------------------+------------------------+
DATE(datetime_column) = const
SHOW GLOBAL VARIABLES WHERE
Variable_name LIKE 'binlog_do_db' OR
Variable_name LIKE 'binlog_ignore_db' OR
Variable_name LIKE 'binlog_row_event_max_size';
+---------------------------+-------+
| Variable_name | Value |
+---------------------------+-------+
| binlog_do_db | |
| binlog_ignore_db | |
| binlog_row_event_max_size | 8192 |
+---------------------------+-------+
START SLAVE UNTIL (SQL_BEFORE_GTIDS|SQL_AFTER_GTIDS)="<gtid_list>"
0-1-1
1-1-1
0-1-2
1-1-2
0-1-3
1-1-3
START SLAVE UNTIL SQL_BEFORE_GTIDS="1-1-2"
START SLAVE UNTIL SQL_AFTER_GTIDS="1-1-2"
CREATE USER 'MariaDBUser'@'%' IDENTIFIED VIA PARSEC USING PASSWORD('MyPassword123!');
SHOW GRANTS FOR MariaDBUser@'%';
Grants FOR MariaDBUser@%
GRANT USAGE ON *.* TO `MariaDBUser`@`%` IDENTIFIED VIA parsec USING 'P0:lhXyNv1cIxpB8EnTxR7ON7S7:1l3rWRW1/jw45yrvYXB8eh02wzk7lcJcz4CMcWw2b+8'
SHOW grants;
+--------------------------------------------------+
| Grants for user1@% |
+--------------------------------------------------+
| GRANT USAGE ON *.* TO `user1`@`%` |
| GRANT SELECT, EXECUTE ON `test`.* TO `user1`@`%` |
+--------------------------------------------------+
SHOW CREATE PROCEDURE myProc \G
*************************** 1. row ***************************
Procedure: myProc
sql_mode: STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
Create Procedure: NULL
character_set_client: utf8mb3
collation_connection: utf8mb3_general_ci
Database Collation: utf8mb4_general_ci
SHOW grants;
+-----------------------------------------------------------------------+
| Grants for user1@% |
+-----------------------------------------------------------------------+
| GRANT USAGE ON *.* TO `user1`@`%` |
| GRANT SELECT, EXECUTE, SHOW CREATE ROUTINE ON `test`.* TO `user1`@`%` |
+-----------------------------------------------------------------------+
SHOW CREATE PROCEDURE myProc \G
*************************** 1. row ***************************
Procedure: myProc
sql_mode: STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
Create Procedure: CREATE DEFINER=`root`@`localhost` PROCEDURE `myProc`()
BEGIN
SELECT "My Definiton of a Stored Procedure";
END
character_set_client: utf8mb3
collation_connection: utf8mb3_general_ci
Database Collation: utf8mb4_general_ci
CREATE DATABASE test;
USE test;
CREATE TABLE t1 (id INT);
CREATE USER user1;
GRANT SELECT, UPDATE ON *.* TO user1;
CREATE USER user2;
GRANT SELECT ON test.* TO user2;
CREATE USER user3;
GRANT SELECT ON test.t1 TO user3;
SELECT * FROM sys.privileges_by_table_by_level WHERE GRANTEE NOT LIKE "'root'@'%'";
+--------------+------------+-------------+-----------+--------+
| TABLE_SCHEMA | TABLE_NAME | GRANTEE | PRIVILEGE | LEVEL |
+--------------+------------+-------------+-----------+--------+
| test | t1 | 'user1'@'%' | SELECT | GLOBAL |
| test | t1 | 'user1'@'%' | UPDATE | GLOBAL |
| test | t1 | 'user2'@'%' | SELECT | SCHEMA |
| test | t1 | 'user3'@'%' | SELECT | TABLE |
+--------------+------------+-------------+-----------+--------+
[mariadb]
...
innodb_data_file_path=ibdata1:12M;ibdata2:50M:autoextend:autoshrink
SET GLOBAL innodb_truncate_temporary_tablespace_now=1;
CREATE TEMPORARY TABLE t1(f1 INT NOT NULL, f2 INT NOT NULL) ENGINE=InnoDB;
INSERT INTO t1 SELECT seq, seq FROM seq_1_to_65536;
DROP TABLE t1;
SELECT NAME, FILE_SIZE FROM INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES WHERE NAME="innodb_temporary";
+------------------+-----------+
| NAME | FILE_SIZE |
+------------------+-----------+
| innodb_temporary | 79691776 |
+------------------+-----------+
SET GLOBAL INNODB_TRUNCATE_TEMPORARY_TABLESPACE_NOW= 1;
SELECT NAME, FILE_SIZE FROM INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES WHERE NAME="innodb_temporary";
+------------------+-----------+
| NAME | FILE_SIZE |
+------------------+-----------+
| innodb_temporary | 12582912 |
+------------------+-----------+
FLUSH TABLES t1 FOR EXPORT;
--copy_file $MYSQLD_DATADIR/test/t1.cfg $MYSQLD_DATADIR/test/t2.cfg
--copy_file $MYSQLD_DATADIR/test/t1.frm $MYSQLD_DATADIR/test/t2.frm
--copy_file $MYSQLD_DATADIR/test/t1.ibd $MYSQLD_DATADIR/test/t2.ibd
UNLOCK TABLES;
ALTER TABLE t2 IMPORT TABLESPACE;
2022-03-24 13:10:43 0 [Note] WSREP: REPORTING SST PROGRESS: '{ "from": 1, "to": 3, "total": 23106759472, "done": 23106759472, "indefinite": -1 }'
SELECT * FROM processlist\G
*************************** 1. row ***************************
...
*************************** 2. row ***************************
ID: 6
USER: root
HOST: localhost
DB: test
COMMAND: Query
TIME: 1
STATE: Sending data
INFO: select * from t1
TIME_MS: 1340.406
STAGE: 0
MAX_STAGE: 0
PROGRESS: 0.000
MEMORY_USED: 89856
MAX_MEMORY_USED: 392544
EXAMINED_ROWS: 0
SENT_ROWS: 3895737
QUERY_ID: 436
INFO_BINARY: select * from t1
TID: 100
SELECT JSON_OVERLAPS('{"A": 1, "B": {"C":2}}', '{"A": 2, "B": {"C":2}}') AS is_overlap;
+---------------------+
| is_overlap |
+---------------------+
| 1 |
+---------------------+
mhnsw_default_distance - Default value for the DISTANCE vector index option
mhnsw_default_m - Default value for the M vector index option
mhnsw_ef_search - Minimal number of result candidates to look for in the vector index for ORDER BY ... LIMIT N queries.
SELECT VEC_ToText(x'e360d63ebe554f3fcdbc523f4522193f5236083d');
+---------------------------------------------------------+
| VEC_ToText(x'e360d63ebe554f3fcdbc523f4522193f5236083d') |
+---------------------------------------------------------+
| [0.418708,0.809902,0.823193,0.598179,0.033255] |
+---------------------------------------------------------+
INSERT INTO v VALUES
(1, x'e360d63ebe554f3fcdbc523f4522193f5236083d'),
(2, x'f511303f72224a3fdd05fe3eb22a133ffae86a3f'),
(3,x'f09baa3ea172763f123def3e0c7fe53e288bf33e'),
(4,x'b97a523f2a193e3eb4f62e3f2d23583e9dd60d3f'),
(5,x'f7c5df3e984b2b3e65e59d3d7376db3eac63773e'),
(6,x'de01453ffa486d3f10aa4d3fdd66813c71cb163f'),
(7,x'76edfc3e4b57243f10f8423fb158713f020bda3e'),
(8,x'56926c3fdf098d3e2c8c5e3d1ad4953daa9d0b3e'),
(9,x'7b713f3e5258323f80d1113d673b2b3f66e3583f'),
(10,x'6ca1d43e9df91b3fe580da3e1c247d3f147cf33e');
SELECT id FROM v
ORDER BY VEC_DISTANCE_EUCLIDEAN(v, x'6ca1d43e9df91b3fe580da3e1c247d3f147cf33e');
+----+
| id |
+----+
| 10 |
| 7 |
| 3 |
| 9 |
| 2 |
| 1 |
| 5 |
| 4 |
| 6 |
| 8 |
+----+
SELECT VEC_DISTANCE_COSINE(VEC_FROMTEXT('[1,2,3]'), VEC_FROMTEXT('[3,5,7]'));
+-----------------------------------------------------------------------+
| VEC_DISTANCE_COSINE(VEC_FROMTEXT('[1,2,3]'), VEC_FROMTEXT('[3,5,7]')) |
+-----------------------------------------------------------------------+
| 0.00258509695694209 |
+-----------------------------------------------------------------------+
Log write buffering added to the and plugins ()
The buffering is controlled by the added variables:
server_audit_file_buffer_size - defines the size of the buffer. The default value is 0, meaning there is no buffering at all. Setting a non-zero value enables buffering with a buffer of the specified size, aligned to 8192. The maximum value is 65536.
Segmented key cache for Aria ()
A new variable (default 1) to define the number of segments has been added. The default disables the new feature.
Backport fixes for Vectors to MariaDB Enterprise 11.4 ()
server_audit_sync_log_file - flushes the buffer to the log file. While a log record is in the buffer, it cannot be seen in the log file, and if there are not many events to log, the time before records can be observed can be significant. A user can execute SET GLOBAL server_audit_sync_log_file=1 to force the buffer to be written to the file and make sure recent records are not missed.
Two new fields are available via SHOW REPLICA STATUS (MENT-2129)
Connects_Tried, which provides the number of attempts the replica has made to connect to the primary, and
Master_Retry_Count, which provides the number of times the replica will attempt to connect to a primary before giving up.
The authentication plugin caching_sha2_password has been added, not loaded by default (MDEV-37600)
Sporadic segmentation faults possibly related to vector search
Incorrect error/docs for Vector column lengths (max = 65532)
Unexpected ER_TABLE_EXISTS_ERROR on primary or replica upon CREATE OR REPLACE for partitioned table
Cannot access shared MEM_ROOT without a lock
Can't find record in 't1' on INSERT to Vector table
Create vector table failed with VECTOR INDEX when innodb_force_primary_key=on
Assertion when adding FK to MyISAM/Aria table with a vector index
UBSAN: 32801 is outside the range of representable values of type 'short'
Always release ctx in mhnsw_delete_all
Optimise dot_product by loop-unrolling by a factor of 4
Vector index search allocates too much memory for large ef_search
ALTER performs vector truncation without WARN_DATA_TRUNCATED or similar warnings/errors
DATA/INDEX DIRECTORY options are ignored for vector index
DATA/INDEX DIRECTORY handling is inconsistent
Assert with vector index and very long PK
InnoDB: Failing assertion: prebuilt->select_lock_type != LOCK_NONE || srv_read_only_mode || trx->read_view.is_open()
Crash on disconnect when dropped Aria table with vector key under lock
InnoDB assert with vector index under LOCK TABLES
mhnsw: support powerpc64 SIMD instructions
mhnsw: support aarch64 SIMD instructions
IGNORED attribute has no effect on vector keys
IMPORT TABLESPACE does not work for tables with vector, although allowed
Vector-related error messages worth improving when possible
Adding a regular index on a vector column leads to invalid table structure
Vector values do not survive mariadb-dump / restore
Server crashes in Charset::mbminlen / Item_func_vec_fromtext::val_str upon mixing vector type with string
Server crashes in mhnsw_read_first upon using vector key with views
Server crashes when checking/updating a table having vector key after enabling innodb_force_primary_key
ALTER TABLE re-creating vector key is no-op with non-copying alter algorithms (default)
Server crash in FVector::distance_to upon concurrent SELECT
Server crashes in Item_func_vec_distance_common::get_const_arg
Non-copying ALTER does not pad VECTOR column, vector search further does not work
Assertion bitmap_is_set(&read_partitions, next->id) failed in int partition_info::vers_set_hist_part(THD *)
Copyright © 2025 MariaDB. All rights reserved.