Special Functions

Explore special functions in MariaDB Server. This section details unique SQL functions that provide specialized capabilities, often related to server internals, diagnostics, or specific data handling.

Dynamic Columns Functions

Manage schema-less data within relational tables. These functions, such as COLUMN_CREATE and COLUMN_GET, allow you to store and retrieve variable sets of columns in a single BLOB field.

Galera Functions

Interact with the Galera Cluster plugin. These functions provide internal status information and control mechanisms specific to synchronous multi-master replication nodes.

JSON Functions

Manipulate JSON documents natively. Use these functions to extract values, modify structures, and query data stored in JSON format directly within your SQL statements.

Window Functions

Explore window functions in MariaDB Server. This section details SQL functions that perform calculations across a set of table rows related to the current row, enabling advanced analytical queries.

COLUMN_CHECK

Verify dynamic column integrity. This function checks if a blob containing dynamic columns is valid and returns 1 if it is, 0 otherwise.

Syntax

COLUMN_CHECK(dyncol_blob);

Description

Check if dyncol_blob is a valid packed dynamic columns blob. Return value of 1 means the blob is valid, return value of 0 means it is not.

Rationale: Normally, one works with valid dynamic column blobs. Functions like COLUMN_CREATE, COLUMN_ADD, and COLUMN_DELETE always return valid dynamic column blobs. However, if a dynamic column blob is accidentally truncated, or transcoded from one character set to another, it will be corrupted. This function can be used to check whether a value in a blob field is a valid dynamic column blob.
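For instance (values are arbitrary), a blob built by COLUMN_CREATE passes the check, while an ordinary string does not:

```sql
-- A valid dynamic column blob returns 1
SELECT COLUMN_CHECK(COLUMN_CREATE('color', 'blue'));

-- An arbitrary string is not a valid dynamic column blob, so this returns 0
SELECT COLUMN_CHECK('not-a-dyncol-blob');
```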

This page is licensed: CC BY-SA / Gnu FDL

WSREP_LAST_SEEN_GTID

Return the GTID of the most recent write transaction. This function helps determine the transaction ID to use for synchronization waiting.

Syntax

WSREP_LAST_SEEN_GTID()

Description

Returns the Global Transaction ID of the most recent write transaction observed by the client.

The result can be useful to determine the transaction to provide to WSREP_SYNC_WAIT_UPTO_GTID for waiting and unblocking purposes.

This page is licensed: CC BY-SA / Gnu FDL

Geographic Functions

Process geospatial data. This collection of functions allows you to create, analyze, and manipulate geometric shapes like points, lines, and polygons within your database.

Geographic and geometry functions. See Geographic Features for a full discussion of MariaDB's spatial extensions.

JSON_ARRAY

Create a JSON array. This function evaluates a list of values and returns a JSON array containing those values.

Syntax

JSON_ARRAY([value[, value2] ...])

Description

Returns a JSON array containing the listed values. The list can be empty.

Example

SELECT Json_Array(56, 3.1416, 'My name is "Foo"', NULL);
+--------------------------------------------------+
| Json_Array(56, 3.1416, 'My name is "Foo"', NULL) |
+--------------------------------------------------+
| [56, 3.1416, "My name is \"Foo\"", null]         |
+--------------------------------------------------+

See also

  • JSON_MAKE_ARRAY, the CONNECT storage engine function

This page is licensed: CC BY-SA / Gnu FDL

JSON_COMPACT

Learn about the JSON_COMPACT function in MariaDB. This function removes all unnecessary whitespace from a JSON document, minimizing its size for efficient storage and transmission.

Syntax

JSON_COMPACT(json_doc)

Description

Removes all unnecessary spaces so the JSON document is as short as possible.

Example

SET @j = '{ "A": 1, "B": [2, 3]}';

SELECT JSON_COMPACT(@j), @j;
+-------------------+------------------------+
| JSON_COMPACT(@j)  | @j                     |
+-------------------+------------------------+
| {"A":1,"B":[2,3]} | { "A": 1, "B": [2, 3]} |
+-------------------+------------------------+

See Also

  • JSON video tutorial covering JSON_COMPACT.

This page is licensed: CC BY-SA / Gnu FDL

COLUMN_CREATE

Create a dynamic column blob. This function generates a binary string containing specified column names and values for storage in a BLOB.

Syntax

COLUMN_CREATE(column_nr, value [as type], [column_nr, value [as type]]...)
COLUMN_CREATE(column_name, value [as type], [column_name, value [as type]]...)

Description

Returns a blob that stores the specified columns with values.

The return value is suitable for:

  • storing in a table;

  • further modification with other dynamic columns functions.

The as type part allows one to specify the value type. In most cases, this is redundant because MariaDB is able to deduce the type of the value. Explicit type specification may be needed when the type of the value is not apparent. For example, a literal '2012-12-01' has a CHAR type by default; one will need to specify '2012-12-01' AS DATE to have it stored as a date. See Dynamic Columns: Datatypes for further details.

This page is licensed: CC BY-SA / Gnu FDL

COLUMN_EXISTS

Check for a dynamic column. This function returns 1 if a specified column exists within a dynamic column blob, and 0 otherwise.

Syntax

COLUMN_EXISTS(dyncol_blob, column_nr)
COLUMN_EXISTS(dyncol_blob, column_name)

Description

Checks if a column with name column_name exists in dyncol_blob. If yes, returns 1, otherwise returns 0. See dynamic columns for more information.

This page is licensed: CC BY-SA / Gnu FDL

WSREP_SYNC_WAIT_UPTO_GTID

Wait for a specific GTID to be applied. This function blocks the client until the node has committed the transaction with the specified Global Transaction ID.

Syntax

WSREP_SYNC_WAIT_UPTO_GTID(gtid[,timeout])

Description

Blocks the client until the transaction specified by the given Global Transaction ID is applied and committed by the node.

The optional timeout argument can be used to specify a block timeout in seconds. If not provided, the timeout is indefinite.

Returns the node that applied and committed the Global Transaction ID, ER_LOCAL_WAIT_TIMEOUT if the function times out before this, or ER_WRONG_ARGUMENTS if the function is given an invalid GTID.

This page is licensed: CC BY-SA / Gnu FDL

JSON_ARRAY_INTERSECT

Available from version 11.2, this function compares two JSON arrays and returns a new array containing only the items present in both.

JSON_ARRAY_INTERSECT is available from MariaDB 11.2.

Syntax

JSON_ARRAY_INTERSECT(arr1, arr2)

Description

Finds the intersection of two JSON arrays and returns an array of the items found in both.

Examples

SET @json1= '[1,2,3]';
SET @json2= '[1,2,4]';

SELECT json_array_intersect(@json1, @json2);
+--------------------------------------+
| json_array_intersect(@json1, @json2) |
+--------------------------------------+
| [1, 2]                               |
+--------------------------------------+

This page is licensed: CC BY-SA / Gnu FDL

JSON_SET

Insert or update data in a JSON document. This function inserts or updates data in a JSON document at a specified path and returns the result.

Syntax

JSON_SET(json_doc, path, val[, path, val] ...)

Description

Updates or inserts data into a JSON document, returning the result, or NULL if any of the arguments are NULL or the optional path fails to find an object.

An error will occur if the JSON document is invalid, the path is invalid, or if the path contains a * or ** wildcard.

JSON_SET can update or insert data, while JSON_REPLACE can only update, and JSON_INSERT can only insert.

Examples

SELECT JSON_SET(Priv, '$.locked', 'true') FROM mysql.global_priv

This page is licensed: CC BY-SA / Gnu FDL

WSREP_LAST_WRITTEN_GTID

Return the GTID of the client's last write. This function identifies the specific transaction ID generated by the current client's most recent write operation.

Syntax

WSREP_LAST_WRITTEN_GTID()

Description

Returns the Global Transaction ID of the most recent write transaction performed by the client.

This page is licensed: CC BY-SA / Gnu FDL

COLUMN_LIST

List dynamic column names. This function returns a comma-separated list of all column names contained within a dynamic column blob.

Syntax

COLUMN_LIST(dyncol_blob);

Description

Returns a comma-separated list of column names. The names are quoted with backticks. See dynamic columns for more information.

This page is licensed: CC BY-SA / Gnu FDL

JSON_PRETTY

Learn about JSON_PRETTY in MariaDB. This function is an alias for JSON_DETAILED, serving to format JSON documents in a human-readable way by adding indentation and newlines.

JSON_PRETTY is available from MariaDB 10.10.3, 10.9.5, 10.8.7, 10.7.8, 10.6.12, 10.5.19, and 10.4.28.

JSON_PRETTY is an alias for JSON_DETAILED.

This page is licensed: CC BY-SA / Gnu FDL

JSON_QUOTE

Quote a string as a JSON value. This function wraps a string with double quotes and escapes special characters to create a valid JSON string literal.

Syntax

JSON_QUOTE(json_value)

Description

Quotes a string as a JSON value, usually for producing valid JSON string literals for inclusion in JSON documents. Wraps the string with double quote characters and escapes interior quotes and other special characters, returning a utf8mb4 string. Returns NULL if the argument is NULL.

Examples

SELECT JSON_QUOTE('A'), JSON_QUOTE("B"), JSON_QUOTE('"C"');
+-----------------+-----------------+-------------------+
| JSON_QUOTE('A') | JSON_QUOTE("B") | JSON_QUOTE('"C"') |
+-----------------+-----------------+-------------------+
| "A"             | "B"             | "\"C\""           |
+-----------------+-----------------+-------------------+

This page is licensed: CC BY-SA / Gnu FDL

JSON_LENGTH

Return the length of a JSON document. This function returns the length of a JSON document or, if a path is given, the length of the value within the path.

Syntax

JSON_LENGTH(json_doc[, path])

Description

Returns the length of a JSON document, or, if the optional path argument is given, the length of the value within the document specified by the path.

Returns NULL if any of the arguments are NULL or the path argument does not identify a value in the document.

An error occurs if the JSON document is invalid, the path is invalid, or if the path contains a * or ** wildcard.

Length is determined as follows:

  • A scalar's length is always 1.

  • If an array, the number of elements in the array.

  • If an object, the number of members in the object.

The lengths of nested arrays or objects are not counted.

This page is licensed: CC BY-SA / Gnu FDL

JSON_LOOSE

Discover JSON_LOOSE in MariaDB. This function adds spaces to a JSON document to improve its readability, providing a format that is easier for humans to scan than compact JSON.

Syntax

JSON_LOOSE(json_doc)

Description

Adds spaces to a JSON document to make it look more readable.

Example

SET @j = '{ "A":1,"B":[2,3]}';

SELECT JSON_LOOSE(@j), @j;
+-----------------------+--------------------+
| JSON_LOOSE(@j)        | @j                 |
+-----------------------+--------------------+
| {"A": 1, "B": [2, 3]} | { "A":1,"B":[2,3]} |
+-----------------------+--------------------+

This page is licensed: CC BY-SA / Gnu FDL

COLUMN_JSON

Convert dynamic columns to JSON. This function returns a JSON string representation of the data stored in a dynamic column blob.

Syntax

COLUMN_JSON(dyncol_blob)

Description

Returns a JSON representation of the data in dyncol_blob. It can also be used to display nested columns. See Dynamic Columns for more information.

Example
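A minimal illustration (column names and values are arbitrary):

```sql
-- Build a dynamic column blob and render it as JSON
SELECT COLUMN_JSON(COLUMN_CREATE('name', 'Foo', 'qty', 5));
-- returns a JSON object such as {"qty":5,"name":"Foo"}
```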

Limitation: COLUMN_JSON decodes nested dynamic columns to a nesting depth of at most 10 levels. Dynamic columns nested deeper than 10 levels are shown as a BINARY string, without decoding.

This page is licensed: CC BY-SA / Gnu FDL

COLUMN_GET

Retrieve a dynamic column value. This function extracts a specific column's value from a dynamic column blob, casting it to a specified type.

Syntax

COLUMN_GET(dyncol_blob, column_nr as type)
COLUMN_GET(dyncol_blob, column_name as type)

Description

Gets the value of a dynamic column by its name. If no column with the given name exists, NULL will be returned.

column_name as type requires that one specify the datatype of the dynamic column they are reading.

This may seem counter-intuitive: why would one need to specify which datatype they're retrieving? Can't the dynamic columns system figure the datatype from the data being stored?

The answer is: SQL is a statically-typed language. The SQL interpreter needs to know the datatypes of all expressions before the query is run (for example, when one is using prepared statements and runs "select COLUMN_GET(...)", the prepared statement API requires the server to inform the client about the datatype of the column being read before the query is executed and the server can see what datatype the column actually has).

Lengths

Suppose you run a query like this:

SELECT COLUMN_GET(dyncol_blob, 'column_name' AS CHAR) FROM tbl;

Without specifying a maximum length (i.e. using AS CHAR, not AS CHAR(n)), MariaDB will report the maximum length of the result set column to be 16,777,216. This may cause excessive memory usage in some client libraries, because they try to pre-allocate a buffer of the maximum result set width. To avoid this problem, use AS CHAR(n) whenever you use COLUMN_GET in the select list.

See Dynamic Columns: Datatypes for more information about datatypes.

This page is licensed: CC BY-SA / Gnu FDL

Differences between JSON_QUERY and JSON_VALUE

Learn the difference between JSON_QUERY and JSON_VALUE in MariaDB. JSON_QUERY returns objects or arrays, while JSON_VALUE extracts scalar values.

The primary difference between the two functions is that JSON_QUERY returns an object or an array, while JSON_VALUE returns a scalar.

Take the following JSON document as an example:

SET @json='{ "x": [0,1], "y": "[0,1]", "z": "Monty" }';

Note that data member "x" is an array, and data members "y" and "z" are strings. The following examples demonstrate the differences between the two functions.

SELECT JSON_QUERY(@json,'$'), JSON_VALUE(@json,'$');
+--------------------------------------------+-----------------------+
| JSON_QUERY(@json,'$')                      | JSON_VALUE(@json,'$') |
+--------------------------------------------+-----------------------+
| { "x": [0,1], "y": "[0,1]", "z": "Monty" } | NULL                  |
+--------------------------------------------+-----------------------+

SELECT JSON_QUERY(@json,'$.x'), JSON_VALUE(@json,'$.x');
+-------------------------+-------------------------+
| JSON_QUERY(@json,'$.x') | JSON_VALUE(@json,'$.x') |
+-------------------------+-------------------------+
| [0,1]                   | NULL                    |
+-------------------------+-------------------------+

SELECT JSON_QUERY(@json,'$.y'), JSON_VALUE(@json,'$.y');
+-------------------------+-------------------------+
| JSON_QUERY(@json,'$.y') | JSON_VALUE(@json,'$.y') |
+-------------------------+-------------------------+
| NULL                    | [0,1]                   |
+-------------------------+-------------------------+

SELECT JSON_QUERY(@json,'$.z'), JSON_VALUE(@json,'$.z');
+-------------------------+-------------------------+
| JSON_QUERY(@json,'$.z') | JSON_VALUE(@json,'$.z') |
+-------------------------+-------------------------+
| NULL                    | Monty                   |
+-------------------------+-------------------------+

SELECT JSON_QUERY(@json,'$.x[0]'), JSON_VALUE(@json,'$.x[0]');
+----------------------------+----------------------------+
| JSON_QUERY(@json,'$.x[0]') | JSON_VALUE(@json,'$.x[0]') |
+----------------------------+----------------------------+
| NULL                       | 0                          |
+----------------------------+----------------------------+

This page is licensed: CC BY-SA / Gnu FDL

JSON_ARRAYAGG

Aggregate values into a JSON array. This function aggregates a result set column into a single JSON array.

JSON_ARRAYAGG is available from MariaDB 10.5.

Syntax

JSON_ARRAYAGG(column_or_expression)

Description

JSON_ARRAYAGG returns a JSON array containing an element for each value in a given set of JSON or SQL values. It acts on a column or an expression that evaluates to a single value.

The maximum returned length in bytes is determined by the group_concat_max_len server system variable.

Returns NULL in the case of an error, or if the result contains no rows.

JSON_ARRAYAGG cannot currently be used as a window function.

The full syntax is as follows:

JSON_ARRAYAGG([DISTINCT] expr [,expr ...]
  [ORDER BY {unsigned_integer | col_name | expr} [ASC | DESC] [,col_name ...]]
  [LIMIT {[offset,] row_count | row_count OFFSET offset}])

Examples
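A minimal sketch, assuming a scratch table t1 with an integer column a:

```sql
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1), (2), (3);

-- Aggregate the column into a single JSON array containing 1, 2 and 3
SELECT JSON_ARRAYAGG(a) FROM t1;
```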

This page is licensed: CC BY-SA / Gnu FDL

JSON_ARRAY_APPEND

Explore JSON_ARRAY_APPEND in MariaDB. This function appends values to the end of specified arrays within a JSON document, returning the modified result.

Syntax

JSON_ARRAY_APPEND(json_doc, path, value[, path, value] ...)

Description

Appends values to the end of the specified arrays within a JSON document, returning the result, or NULL if any of the arguments are NULL.

Evaluation is performed from left to right, with the resulting document from the previous pair becoming the new value against which the next pair is evaluated.

If the json_doc is not a valid JSON document, or if any of the paths are not valid, or contain a * or ** wildcard, an error is returned.

Examples
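Illustrative calls (whitespace in the results may differ slightly):

```sql
-- Append to the top-level array: yields [1, 2, 3]
SELECT JSON_ARRAY_APPEND('[1, 2]', '$', 3);

-- Append to a nested array: yields {"a": [1, 2]}
SELECT JSON_ARRAY_APPEND('{"a": [1]}', '$.a', 2);
```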

This page is licensed: CC BY-SA / Gnu FDL

JSON_CONTAINS_PATH

Check for a JSON path. This function returns 1 if a specified path exists within a JSON document, or 0 if it does not.

Syntax

JSON_CONTAINS_PATH(json_doc, return_arg, path[, path] ...)

Description

Indicates whether the given JSON document contains data at the specified path or paths. Returns 1 if it does, 0 if not and NULL if any of the arguments are null.

The return_arg can be one or all:

  • one - Returns 1 if at least one path exists within the JSON document.

  • all - Returns 1 only if all paths exist within the JSON document.

Examples
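A short illustration of the two return_arg modes (document contents are arbitrary):

```sql
SET @json = '{"a": 1, "b": {"c": 2}}';

-- 'one': true if at least one of the paths exists; returns 1
SELECT JSON_CONTAINS_PATH(@json, 'one', '$.a', '$.x');

-- 'all': true only if every path exists; returns 0 because $.x is missing
SELECT JSON_CONTAINS_PATH(@json, 'all', '$.a', '$.x');
```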

This page is licensed: CC BY-SA / Gnu FDL

JSON_DEPTH

Discover JSON_DEPTH in MariaDB. This function returns the maximum depth of a JSON document, assigning a depth of 1 to scalars and empty structures, and higher values for nested data.

Syntax

Description

Returns the maximum depth of the given JSON document, or NULL if the argument is null. An error occurs if the argument is an invalid JSON document.

  • Scalar values, and empty arrays or objects, have a depth of 1.

  • Arrays or objects containing only elements or values of depth 1 have a depth of 2.

  • In all other cases, the depth is 3 or greater.

The maximum supported depth is 32.

Examples
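Illustrative calls covering the three rules above:

```sql
-- A scalar and an empty array both have depth 1: returns 1, 1
SELECT JSON_DEPTH('42'), JSON_DEPTH('[]');

-- An array of scalars has depth 2; nesting adds a level: returns 2, 3
SELECT JSON_DEPTH('[1, 2]'), JSON_DEPTH('[[1], 2]');
```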

This page is licensed: CC BY-SA / Gnu FDL

JSON_EXISTS

Explore JSON_EXISTS in MariaDB. This function checks whether a specified JSON document contains an element at a given path, returning 1 for existence and 0 otherwise.

Syntax

JSON_EXISTS(json_doc, json_path)

Description

Determines whether json_doc has an element pointed to by path json_path. Returns 1 if the element exists, 0 if not, or NULL if any of the inputs were NULL.

Examples
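A quick illustration (document contents are arbitrary):

```sql
SET @json = '{"key1": "xxxx", "key2": [1, 2, 3]}';

SELECT JSON_EXISTS(@json, '$.key1');     -- returns 1
SELECT JSON_EXISTS(@json, '$.key3');     -- returns 0
SELECT JSON_EXISTS(@json, '$.key2[1]');  -- returns 1
```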

This page is licensed: CC BY-SA / Gnu FDL

JSON_REMOVE

Remove data from a JSON document. This function removes data from a JSON document at a specified path and returns the result.

Syntax

JSON_REMOVE(json_doc, path[, path] ...)

Description

Removes data from a JSON document returning the result, or NULL if any of the arguments are null. If the element does not exist in the document, no changes are made.

The function returns NULL and throws a warning if the JSON document is invalid, the path is invalid, contains a range, or contains a * or ** wildcard.

Path arguments are evaluated from left to right, with the result from the earlier evaluation being used as the value for the next.

Examples
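Illustrative calls (exact whitespace in the results may differ):

```sql
-- Remove an object member: yields {"A": 1, "C": 3}
SELECT JSON_REMOVE('{"A": 1, "B": 2, "C": 3}', '$.B');

-- Remove the first array element: yields [2, 3]
SELECT JSON_REMOVE('[1, 2, 3]', '$[0]');
```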

See Also

  • covering JSON_REMOVE.

This page is licensed: CC BY-SA / Gnu FDL

DENSE_RANK

Calculate rank without gaps. This function assigns a rank to each row within a partition, with tied values receiving the same rank and no numbers skipped.

Syntax

DENSE_RANK() OVER (
  [ PARTITION BY partition_expression ]
  [ ORDER BY order_list ]
)

Description

DENSE_RANK() is a window function that displays the number of a given row, starting at one and following the ORDER BY sequence of the window function, with identical values receiving the same result. Unlike the RANK() function, no values are skipped if the preceding results are identical. It is also similar to the ROW_NUMBER() function, except that in that function, identical values receive a different row number for each result.

Examples
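A sketch with an illustrative table; ties on score share a rank and the next rank is not skipped:

```sql
CREATE TABLE student (name CHAR(10), test CHAR(10), score TINYINT);
INSERT INTO student VALUES
  ('Chun', 'SQL', 75), ('Chun', 'Tuning', 73),
  ('Esben', 'SQL', 43), ('Esben', 'Tuning', 31),
  ('Kaolin', 'SQL', 56), ('Kaolin', 'Tuning', 88),
  ('Tatiana', 'SQL', 87), ('Tatiana', 'Tuning', 83);

-- Rank scores within each test without gaps in the numbering
SELECT name, test, score,
       DENSE_RANK() OVER (PARTITION BY test ORDER BY score DESC) AS dr
FROM student;
```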

The distinction between DENSE_RANK(), RANK() and ROW_NUMBER():

See Also

This page is licensed: CC BY-SA / Gnu FDL

MEDIAN

Calculate the median value. This window function returns the middle value (50th percentile) of an ordered set of values within the window.

Syntax

MEDIAN(median expression) OVER (
  [ PARTITION BY partition_expression ] 
)

Description

MEDIAN() is a window function that returns the median value of a range of values.

It is a specific case of PERCENTILE_CONT, with an argument of 0.5, ordering by the column given in MEDIAN's argument.

MEDIAN(<expression>) OVER ( [ PARTITION BY partition_expression ] ) is equivalent to:

PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY <expression>) OVER ( [ PARTITION BY partition_expression ] )

Examples

See Also

This page is licensed: CC BY-SA / Gnu FDL

COLUMN_ADD

Update dynamic columns. This function adds or updates values within a dynamic column blob, returning the new blob content.

Syntax

COLUMN_ADD(dyncol_blob, column_nr, value [as type], [column_nr, value [as type]]...)
COLUMN_ADD(dyncol_blob, column_name, value [as type], [column_name, value [as type]]...)

Description

Adds or updates dynamic columns.

  • dyncol_blob must be either a valid dynamic columns blob (for example, COLUMN_CREATE returns such blob), or an empty string.

  • column_name specifies the name of the column to be added. If dyncol_blob already has a column with this name, it will be overwritten.

  • value specifies the new value for the column. Passing a NULL value will cause the column to be deleted.

  • as type is optional. See the Datatypes section of the Dynamic Columns article for a discussion about types.

The return value is a dynamic column blob after the modifications.

Examples
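A short illustration using session variables (names and values are arbitrary); note that listing an existing column name overwrites its value:

```sql
-- Start from a blob with one column, then add a second and update the first
SET @blob = COLUMN_CREATE('color', 'blue');
SET @blob = COLUMN_ADD(@blob, 'size', 'XL', 'color', 'red');

SELECT COLUMN_GET(@blob, 'color' AS CHAR(10));  -- returns 'red'
```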

Note: COLUMN_ADD() is a regular function, hence, in order to update the value in the table you have to use the UPDATE ... SET dynamic_col=COLUMN_ADD(dynamic_col, ...) pattern.

This page is licensed: CC BY-SA / Gnu FDL

JSON_CONTAINS

Check for JSON containment. This function returns 1 if a candidate JSON document is contained within a target JSON document, or 0 otherwise.

Syntax

JSON_CONTAINS(json_doc, val[, path])

Description

Returns whether or not the specified value is found in the given JSON document or, optionally, at the specified path within the document. Returns 1 if it does, 0 if not and NULL if any of the arguments are null. An error occurs if the document or path is not valid, or contains the * or ** wildcards.

Examples
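A quick illustration (document contents are arbitrary):

```sql
SET @json = '{"A": 0, "B": [1, 2]}';

SELECT JSON_CONTAINS(@json, '[1, 2]', '$.B');  -- returns 1
SELECT JSON_CONTAINS(@json, '1', '$.A');       -- returns 0
SELECT JSON_CONTAINS(@json, '0', '$.A');       -- returns 1
```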

This page is licensed: CC BY-SA / Gnu FDL

JSON_MERGE_PATCH

Learn about JSON_MERGE_PATCH in MariaDB. This RFC 7396-compliant function merges JSON documents by overwriting duplicate keys, serving as a modern replacement for the deprecated JSON_MERGE.

Syntax

JSON_MERGE_PATCH(json_doc, json_doc[, json_doc] ...)

Description

Merges the given JSON documents, returning the merged result, or NULL if any argument is NULL.

JSON_MERGE_PATCH is an RFC 7396-compliant replacement for JSON_MERGE, which is deprecated.

Unlike JSON_MERGE_PRESERVE, members with duplicate keys are not preserved.

Example
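Illustrative calls showing the RFC 7396 semantics (whitespace in the results may differ):

```sql
-- Later documents overwrite duplicate keys: yields {"a": 3, "b": 2, "c": 4}
SELECT JSON_MERGE_PATCH('{"a": 1, "b": 2}', '{"a": 3, "c": 4}');

-- A null value removes the key: yields {"a": 1}
SELECT JSON_MERGE_PATCH('{"a": 1, "b": 2}', '{"b": null}');
```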

This page is licensed: CC BY-SA / Gnu FDL

COLUMN_DELETE

Remove dynamic columns. This function deletes specified columns from a dynamic column blob and returns the updated blob.

Syntax

COLUMN_DELETE(dyncol_blob, column_nr, column_nr...)
COLUMN_DELETE(dyncol_blob, column_name, column_name...)

Description

Deletes a dynamic column with the specified name. Multiple names can be given. The return value is a dynamic column blob after the modification.

This page is licensed: CC BY-SA / Gnu FDL

JSON_ARRAY_INSERT

This function inserts values into a JSON document at a specified path, returning the modified document, and supports evaluating multiple path-value pairs sequentially.

Syntax

JSON_ARRAY_INSERT(json_doc, path, value[, path, value] ...)

Description

Inserts a value into a JSON document, returning the modified document, or NULL if any of the arguments are NULL.

Evaluation is performed from left to right, with the resulting document from the previous pair becoming the new value against which the next pair is evaluated.

If the json_doc is not a valid JSON document, or if any of the paths are not valid, or contain a * or ** wildcard, an error is returned.

Examples
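A minimal illustration; the value is inserted at the given index and later elements shift right:

```sql
-- Insert 2 at index 1: yields [1, 2, 4]
SELECT JSON_ARRAY_INSERT('[1, 4]', '$[1]', 2);
```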

This page is licensed: CC BY-SA / Gnu FDL

JSON_DETAILED

Explore JSON_DETAILED in MariaDB. Also known as JSON_PRETTY, this function formats JSON documents with indentation and newlines to emphasize nested structures for readability.

Syntax

JSON_DETAILED(json_doc[, tab_size])

Description

Represents JSON in the most understandable way, emphasizing nested structures.

JSON_PRETTY is an alias for JSON_DETAILED, available from MariaDB 10.10.3, 10.9.5, 10.8.7, 10.7.8, 10.6.12, 10.5.19 and 10.4.28; in earlier versions the alias is not available.

Example

See Also

  • JSON video tutorial covering JSON_DETAILED.

This page is licensed: CC BY-SA / Gnu FDL

JSON_OBJECT_FILTER_KEYS

Discover JSON_OBJECT_FILTER_KEYS in MariaDB. Available from version 11.2, this function returns a new JSON object containing only the key-value pairs where the keys match those provided in a specified array.

JSON_OBJECT_FILTER_KEYS is available from MariaDB 11.2.

Syntax

JSON_OBJECT_FILTER_KEYS(obj, array_keys)

Description

JSON_OBJECT_FILTER_KEYS returns a JSON object containing only those keys from the object that are also present in the array of strings. It is used when one wants the key-value pairs whose keys are common, even if the values differ.

Example
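A sketch of the behavior described above (values are arbitrary; whitespace in the result may differ):

```sql
-- Keep only the members whose keys appear in the array: yields {"a": 1, "c": 3}
SELECT JSON_OBJECT_FILTER_KEYS('{"a": 1, "b": 2, "c": 3}', '["a", "c"]');
```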

This page is licensed: CC BY-SA / Gnu FDL

JSON_EXTRACT

Extract data from a JSON document. This function returns data from a JSON document selected by a given path.

Syntax

JSON_EXTRACT(json_doc, path[, path] ...)

Description

Extracts data from a JSON document. The extracted data is selected from the parts matching the path arguments. Returns all matched values; either as a single matched value, or, if the arguments could return multiple values, a result autowrapped as an array in the matching order.

Returns NULL if no paths match or if any of the arguments are NULL.

An error occurs if any path argument is not a valid path, or if the json_doc argument is not a valid JSON document.

The path expression must be a valid JSON path as supported by MariaDB.

Examples
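Illustrative calls (document contents are arbitrary):

```sql
SET @json = '[1, 2, [3, 4]]';

-- A single matching path returns the matched value: yields [3, 4]
SELECT JSON_EXTRACT(@json, '$[2]');

-- Multiple paths are autowrapped as an array: yields [1, 2]
SELECT JSON_EXTRACT(@json, '$[0]', '$[1]');
```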

See Also

  • JSON video tutorial covering JSON_EXTRACT.

This page is licensed: CC BY-SA / Gnu FDL

JSON_NORMALIZE

Discover JSON_NORMALIZE in MariaDB. This function recursively sorts keys and removes spaces from a JSON document, enabling reliable equality comparisons and unique constraints on JSON data.

JSON_NORMALIZE is available from MariaDB 10.7.

Syntax

JSON_NORMALIZE(json)

Description

Recursively sorts keys and removes spaces, allowing comparison of json documents for equality.

Examples

We may wish our application to use the database to enforce a unique constraint on the JSON contents, and we can do so using the JSON_NORMALIZE function in combination with a unique key.

For example, if we have a table with a JSON column:

Add a unique constraint using JSON_NORMALIZE like this:

We can test this by first inserting a row as normal:

And then seeing what happens with a different string which would produce the same JSON object:
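The steps above can be sketched as follows (table and column names are illustrative; the unique key is built on a virtual column holding the normalized document):

```sql
CREATE TABLE t1 (
  val JSON,
  jnorm JSON AS (JSON_NORMALIZE(val)) VIRTUAL,
  UNIQUE KEY (jnorm)
);

-- First insert succeeds
INSERT INTO t1 (val) VALUES ('{"name": "Foo", "value": 1}');

-- A differently formatted but equivalent object normalizes to the same
-- document, so this second insert fails with a duplicate-key error
INSERT INTO t1 (val) VALUES ('{ "value" : 1, "name" : "Foo" }');
```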

See Also

This page is licensed: CC BY-SA / Gnu FDL

JSON_REPLACE

Replace values in a JSON document. This function replaces existing values in a JSON document and returns the result.

Syntax

JSON_REPLACE(json_doc, path, val[, path, val] ...)

Description

Replaces existing values in a JSON document, returning the result, or NULL if any of the arguments are NULL.

An error will occur if the JSON document is invalid, the path is invalid or if the path contains a * or ** wildcard.

Paths and values are evaluated from left to right, with the result from the earlier evaluation being used as the value for the next.

JSON_REPLACE can only update data, while JSON_INSERT can only insert; JSON_SET can update or insert data.

Examples
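A minimal illustration; an existing path is updated and a missing one is ignored (whitespace in the result may differ):

```sql
-- '$.a' exists and is updated; '$.c' does not exist and is not inserted,
-- so this yields {"a": 10, "b": 2}
SELECT JSON_REPLACE('{"a": 1, "b": 2}', '$.a', 10, '$.c', 3);
```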

See Also

  • JSON video tutorial covering JSON_REPLACE.

This page is licensed: CC BY-SA / Gnu FDL

JSON_EQUALS

Discover JSON_EQUALS in MariaDB. This function checks for equality between two JSON objects, returning 1 if they are equal, 0 if not, handling key order and data types intelligently.

JSON_EQUALS is available from MariaDB 10.7.

Syntax

JSON_EQUALS(json1, json2)

Description

Checks for equality between two JSON documents. Returns 1 if they are equal, 0 if not, or NULL if any of the arguments are NULL.

Examples
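Illustrative calls; object key order is ignored, while array element order matters:

```sql
SELECT JSON_EQUALS('{"a": 1, "b": 2}', '{"b": 2, "a": 1}');  -- returns 1
SELECT JSON_EQUALS('[1, 2]', '[2, 1]');                      -- returns 0
```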

See Also

This page is licensed: CC BY-SA / Gnu FDL

JSON_OBJECT

Create a JSON object. This function evaluates a list of key-value pairs and returns a JSON object containing those pairs.

Syntax

JSON_OBJECT([key, value[, key, value] ...])

Description

Returns a JSON object containing the given key/value pairs. The key/value list can be empty.

An error will occur if there are an odd number of arguments, or any key name is NULL.

Example
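A minimal illustration (keys and values are arbitrary; whitespace in the result may differ):

```sql
-- Yields {"id": 1, "name": "Monty"}
SELECT JSON_OBJECT('id', 1, 'name', 'Monty');
```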

See also

  • JSON_MAKE_OBJECT, the CONNECT storage engine function

This page is licensed: CC BY-SA / Gnu FDL

JSON_UNQUOTE

Unquote a JSON value. This function removes the quotes from a JSON string and unescapes special characters.

Syntax

JSON_UNQUOTE(json_value)

Description

Unquotes a JSON value, returning a string, or NULL if the argument is NULL.

Aggregate Functions as Window Functions

Use aggregate functions over windows. Standard aggregates like SUM and AVG can be used with an OVER clause to calculate running totals and moving averages.

It is possible to use aggregate functions as window functions. An aggregate function used as a window function must have the OVER clause; SUM, for example, can be used this way.

MariaDB currently allows a number of aggregate functions, including COUNT, SUM, AVG, MIN and MAX, to be used as window functions.
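A sketch with an illustrative table, showing SUM with an OVER clause computing a per-group running total:

```sql
CREATE TABLE sales (dept CHAR(10), amount INT);
INSERT INTO sales VALUES ('toys', 100), ('toys', 200), ('books', 50);

-- Each row keeps its detail while carrying a running total for its department
SELECT dept, amount,
       SUM(amount) OVER (PARTITION BY dept ORDER BY amount) AS running_total
FROM sales;
```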

JSON_KEY_VALUE

Explore JSON_KEY_VALUE in MariaDB. Available from version 11.2, this function extracts key/value pairs from a JSON object, enabling easier data transformation and usage with JSON_TABLE.

JSON_KEY_VALUE is available from MariaDB 11.2.

Syntax

JSON_OBJECT_TO_ARRAY

Explore JSON_OBJECT_TO_ARRAY in MariaDB. Available from version 11.2, this function converts JSON objects into arrays of key-value pairs, facilitating operations like finding common values when used with other JSON functions.

JSON_OBJECT_TO_ARRAY is available from MariaDB 11.2.

Syntax

JSON_MERGE_PRESERVE

Explore JSON_MERGE_PRESERVE in MariaDB. This function merges JSON documents while keeping all members, including those with duplicate keys, effectively acting as a synonym for the legacy JSON_MERGE.

Syntax

JSON_MERGE_PRESERVE(json_doc, json_doc[, json_doc] ...)

Description

Merges the given JSON documents, returning the merged result, or NULL if any argument is NULL.

JSON_VALID

Check if a value is valid JSON. This function returns 1 if the argument is a valid JSON document, and 0 otherwise.

Syntax

JSON_VALID(value)

Description

Indicates whether the given value is a valid JSON document or not. Returns 1 if valid, 0 otherwise.

JSON_KEYS

Return keys from a JSON object. This function returns the keys from the top-level value of a JSON object as a JSON array.

Syntax

Description

Returns the keys as a JSON array from the top-level value of a JSON object or, if the optional path argument is provided, the top-level keys from the path.

JSON_INSERT

Learn about the JSON_INSERT function in MariaDB. This function inserts new data into a JSON document without replacing existing values, returning the updated document or NULL.

Syntax

Description

Inserts data into a JSON document, returning the resulting document, or NULL if either of the json_doc or path arguments is NULL.

JSON_OVERLAPS

Discover JSON_OVERLAPS in MariaDB. Available from version 10.9, this function compares two JSON documents, returning true if they share at least one common key-value pair, array element, or scalar value.

JSON_OVERLAPS is available from MariaDB 10.9.

Syntax

JSON_SEARCH

Search for a value in a JSON document. This function returns the path to the given string within a JSON document.

Syntax

Description

Returns the path to the given string within a JSON document, or NULL if any argument is NULL, if the search string is not found, or if no path exists within the document.

JSON_TYPE

Return the type of a JSON value. This function returns a string indicating the type of a JSON value (e.g., OBJECT, ARRAY, INTEGER).

Syntax

Description

Returns the type of a JSON value (as a string), or NULL if the argument is NULL.

JSON_OBJECTAGG

Aggregate key-value pairs into a JSON object. This function aggregates two columns or expressions into a single JSON object.

JSON_OBJECTAGG is available from MariaDB 10.5.

Syntax

JSON_VALUE

Extract a scalar value from a JSON document. This function extracts a value from a JSON document at a given path and returns it as a scalar.

Syntax

Description

Given a JSON document, returns the scalar specified by the path. Returns NULL if not given a valid JSON document, or if there is no match.

JSON_QUERY

Extract a JSON object or array. This function extracts data from a JSON document at a given path, returning a JSON object or array.

Syntax

Description

Given a JSON document, returns an object or array specified by the path. Returns NULL if not given a valid JSON document, or if there is no match.

LEAD

Access data from a following row. This function returns the value of an expression from a row at a specified physical offset after the current row.

Syntax

Description

The LEAD function accesses data from a following row in the same result set without the need for a self-join.

LAG

Access data from a preceding row. This function returns the value of an expression from a row at a specified physical offset prior to the current row.

Syntax

Description

The LAG function accesses data from a previous row, according to the ORDER BY clause, without the need for a self-join.

INSERT INTO tbl SET dyncol_blob=COLUMN_CREATE("column_name", "value");
JSON_DEPTH(json_doc)
JSON_DETAILED(json_doc[, tab_size])
JSON_PRETTY(json_doc[, tab_size])
SELECT item_name, COLUMN_JSON(dynamic_cols) FROM assets;
+-----------------+----------------------------------------+
| item_name       | COLUMN_JSON(dynamic_cols)              |
+-----------------+----------------------------------------+
| MariaDB T-shirt | {"size":"XL","color":"blue"}           |
| Thinkpad Laptop | {"color":"black","warranty":"3 years"} |
+-----------------+----------------------------------------+
SELECT COLUMN_GET(BLOB, 'colname' AS CHAR) ...
JSON_ARRAYAGG([DISTINCT] expr
             [ORDER BY {unsigned_integer | col_name | expr}
                 [ASC | DESC] [,col_name ...]]
             [LIMIT {[offset,] row_count | row_count OFFSET offset}])
SET @json = '{"A": 0, "B": {"C": 1}, "D": 2}';

SELECT JSON_CONTAINS(@json, '2', '$.A');
+----------------------------------+
| JSON_CONTAINS(@json, '2', '$.A') |
+----------------------------------+
|                                0 |
+----------------------------------+

SELECT JSON_CONTAINS(@json, '2', '$.D');
+----------------------------------+
| JSON_CONTAINS(@json, '2', '$.D') |
+----------------------------------+
|                                1 |
+----------------------------------+

SELECT JSON_CONTAINS(@json, '{"C": 1}', '$.A');
+-----------------------------------------+
| JSON_CONTAINS(@json, '{"C": 1}', '$.A') |
+-----------------------------------------+
|                                       0 |
+-----------------------------------------+

SELECT JSON_CONTAINS(@json, '{"C": 1}', '$.B');
+-----------------------------------------+
| JSON_CONTAINS(@json, '{"C": 1}', '$.B') |
+-----------------------------------------+
|                                       1 |
+-----------------------------------------+
SET @json1 = '[1, 2]';
SET @json2 = '[2, 3]';
SELECT JSON_MERGE_PATCH(@json1,@json2),JSON_MERGE_PRESERVE(@json1,@json2);
+---------------------------------+------------------------------------+
| JSON_MERGE_PATCH(@json1,@json2) | JSON_MERGE_PRESERVE(@json1,@json2) |
+---------------------------------+------------------------------------+
| [2, 3]                          | [1, 2, 2, 3]                       |
+---------------------------------+------------------------------------+
SET @json = '[1, 2, [3, 4]]';

SELECT JSON_ARRAY_INSERT(@json, '$[0]', 5);
+-------------------------------------+
| JSON_ARRAY_INSERT(@json, '$[0]', 5) |
+-------------------------------------+
| [5, 1, 2, [3, 4]]                   |
+-------------------------------------+

SELECT JSON_ARRAY_INSERT(@json, '$[1]', 6);
+-------------------------------------+
| JSON_ARRAY_INSERT(@json, '$[1]', 6) |
+-------------------------------------+
| [1, 6, 2, [3, 4]]                   |
+-------------------------------------+

SELECT JSON_ARRAY_INSERT(@json, '$[1]', 6, '$[2]', 7);
+------------------------------------------------+
| JSON_ARRAY_INSERT(@json, '$[1]', 6, '$[2]', 7) |
+------------------------------------------------+
| [1, 6, 7, 2, [3, 4]]                           |
+------------------------------------------------+
SET @obj1= '{ "a": 1, "b": 2, "c": 3}';
SET @obj2= '{"b" : 10, "c": 20, "d": 30}';
SELECT JSON_OBJECT_FILTER_KEYS (@obj1, JSON_ARRAY_INTERSECT(JSON_KEYS(@obj1), JSON_KEYS(@obj2)));
+-------------------------------------------------------------------------------------------+
| JSON_OBJECT_FILTER_KEYS (@obj1, JSON_ARRAY_INTERSECT(JSON_KEYS(@obj1), JSON_KEYS(@obj2))) |
+-------------------------------------------------------------------------------------------+
| {"b": 2, "c": 3}                                                                          |
+-------------------------------------------------------------------------------------------+
SELECT JSON_OBJECT("id", 1, "name", "Monty");
+---------------------------------------+
| JSON_OBJECT("id", 1, "name", "Monty") |
+---------------------------------------+
| {"id": 1, "name": "Monty"}            |
+---------------------------------------+

  • AVG

  • BIT_AND

  • BIT_OR

  • BIT_XOR

  • COUNT

  • MAX

  • MIN

  • STD

  • STDDEV

  • STDDEV_POP

  • STDDEV_SAMP

  • SUM

  • VAR_POP

  • VAR_SAMP

  • VARIANCE

This page is licensed: CC BY-SA / Gnu FDL

    Description

    JSON_KEY_VALUE extracts key/value pairs from a JSON object. The JSON path parameter is used to only return key/value pairs for matching JSON objects.

    Example

    JSON_KEY_VALUE() can be used as an argument to JSON_TABLE(), which allows adding the key to a result set.
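As a minimal sketch of what the function produces (assuming the path has already selected the object), each key/value pair becomes a small object in an array:

```python
def json_key_value(obj):
    """Sketch of JSON_KEY_VALUE on an object already selected by the path:
    each pair becomes a {"key": ..., "value": ...} object in an array."""
    return [{"key": k, "value": v} for k, v in obj.items()]

print(json_key_value({"key1": "val1", "key2": "val2"}))
# [{'key': 'key1', 'value': 'val1'}, {'key': 'key2', 'value': 'val2'}]
```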

    This page is licensed: CC BY-SA / Gnu FDL

    JSON_KEY_VALUE(obj, json_path)
    Description

JSON_OBJECT_TO_ARRAY converts JSON objects found in a JSON document to JSON arrays, where each item in the outer array represents a single key-value pair from the object. This is useful when you want to compare not just common keys but also common values, and it can be used in conjunction with JSON_ARRAY_INTERSECT().
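A minimal sketch of the conversion, matching the example output later on this page (the object's pairs become two-element arrays; nested values are left as-is):

```python
def json_object_to_array(obj):
    """Sketch of JSON_OBJECT_TO_ARRAY on a top-level object:
    each key-value pair becomes a [key, value] array."""
    return [[k, v] for k, v in obj.items()]

doc = {"a": [1, 2, 3], "b": {"key1": "val1", "key2": {"key3": "val3"}}}
print(json_object_to_array(doc))
# [['a', [1, 2, 3]], ['b', {'key1': 'val1', 'key2': {'key3': 'val3'}}]]
```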

    Examples

    This page is licensed: CC BY-SA / Gnu FDL

    JSON_OBJECT_TO_ARRAY(Obj)
    Returns NULL if any argument is NULL.
    JSON_MERGE_PRESERVE is a synonym for JSON_MERGE, which has been deprecated.

    Unlike JSON_MERGE_PATCH, members with duplicate keys are preserved.
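A minimal Python sketch of the preserve-style merge for two documents (arrays concatenate, objects merge with duplicate keys recursing, and anything else is autowrapped into an array); this models the semantics, not the server implementation:

```python
def merge_preserve(a, b):
    """Sketch of JSON_MERGE_PRESERVE for two parsed JSON documents."""
    if isinstance(a, list) and isinstance(b, list):
        return a + b                      # arrays concatenate
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)                     # objects merge key by key
        for k, v in b.items():
            out[k] = merge_preserve(out[k], v) if k in out else v
        return out
    wrap = lambda x: x if isinstance(x, list) else [x]
    return wrap(a) + wrap(b)              # scalars are autowrapped

print(merge_preserve([1, 2], [2, 3]))     # [1, 2, 2, 3]
```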

    Example

    See Also

    • JSON_MERGE_PATCH

    This page is licensed: CC BY-SA / Gnu FDL

    JSON_MERGE_PRESERVE(json_doc, json_doc[, json_doc] ...)
    Returns 1 if valid, 0 if not, and NULL if the argument is NULL.

    From MariaDB 10.4.3, the JSON_VALID function is automatically used as a CHECK constraint for the JSON data type alias, in order to ensure that a valid JSON document is inserted. In earlier versions it is not.
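The check itself is just "does the value parse as JSON". A sketch of the semantics using Python's json module:

```python
import json

def json_valid(s):
    """Sketch of JSON_VALID: 1 if the string parses as JSON,
    0 if not, None (NULL) for a NULL argument."""
    if s is None:
        return None
    try:
        json.loads(s)
        return 1
    except (ValueError, TypeError):
        return 0

print(json_valid('{"id": 1, "name": "Monty"}'))              # 1
print(json_valid('{"id": 1, "name": "Monty", "oddfield"}'))  # 0
```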

    Examples

    See Also

    • JSON video tutorial covering JSON_VALID.

    This page is licensed: CC BY-SA / Gnu FDL

    Keys from nested sub-objects within the top-level value are excluded. The resulting array is empty if the selected object is empty.

    Returns NULL if any of the arguments are null, a given path does not locate an object, or if the json_doc argument is not an object.

    An error will occur if the JSON document is invalid, if the path is invalid, or if the path contains a * or ** wildcard.
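A minimal sketch of the key extraction (assuming the path, if any, has already selected the object): only the object's own top-level keys are returned, and non-objects yield NULL:

```python
def json_keys(value):
    """Sketch of JSON_KEYS on an already-selected value: top-level keys
    of an object as a list; None (NULL) if the value is not an object."""
    return list(value) if isinstance(value, dict) else None

print(json_keys({"A": 1, "B": {"C": 2}}))  # ['A', 'B']  (nested "C" excluded)
```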

    Examples

    This page is licensed: CC BY-SA / Gnu FDL

    JSON_KEYS(json_doc[, path])
    Returns NULL if either of the json_doc or path arguments is null.

    An error occurs if the JSON document is invalid, or if any of the paths are invalid or contain a * or ** wildcard.

    JSON_INSERT can only insert data, while JSON_REPLACE can only update. JSON_SET can update or insert data.
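The insert/replace/set distinction can be sketched for a single top-level key; this is an illustrative model, not the server's path-based implementation:

```python
def set_key(doc, key, val, insert=True, replace=True):
    """Model of JSON_INSERT / JSON_REPLACE / JSON_SET on a top-level key:
    INSERT only adds missing keys, REPLACE only updates existing ones,
    SET does both."""
    exists = key in doc
    if (exists and replace) or (not exists and insert):
        doc = {**doc, key: val}
    return doc

# JSON_INSERT: will not overwrite an existing key
print(set_key({"A": 0}, "A", 1, insert=True, replace=False))  # {'A': 0}
# JSON_REPLACE: will not add a missing key
print(set_key({"A": 0}, "B", 2, insert=False, replace=True))  # {'A': 0}
# JSON_SET: does both
print(set_key({"A": 0}, "A", 1))                              # {'A': 1}
```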

    Examples

    See Also

    • JSON video tutorial covering JSON_INSERT.

    This page is licensed: CC BY-SA / Gnu FDL

    Description

    JSON_OVERLAPS() compares two JSON documents and returns true if they share at least one common key-value pair (for two objects), at least one common element (for two arrays), or, when one argument is a scalar and the other an array, if the array contains the scalar. If both documents are scalars, it returns true if they have the same type and value.

    If none of these conditions is satisfied, it returns false.
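These cases can be sketched directly for parsed documents; an illustrative model of the comparison rules, not the server code:

```python
def json_overlaps(a, b):
    """Sketch of JSON_OVERLAPS: objects share a key-value pair, arrays share
    an element, scalar vs array checks membership, scalars compare equal."""
    if isinstance(a, dict) and isinstance(b, dict):
        return any(k in b and b[k] == v for k, v in a.items())
    if isinstance(a, list) and isinstance(b, list):
        return any(x in b for x in a)
    if isinstance(a, list) or isinstance(b, list):
        arr, scalar = (a, b) if isinstance(a, list) else (b, a)
        return scalar in arr
    return a == b

print(json_overlaps({"A": 1, "B": {"C": 2}}, {"A": 2, "B": {"C": 2}}))  # True
print(json_overlaps([1, 2, True, False, None], [3, 4, [1]]))            # False
```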

    Examples

    A partial match is considered no match.

    Examples

    This page is licensed: CC BY-SA / Gnu FDL

    JSON_OVERLAPS(json_doc1, json_doc2)
    Returns NULL if any of json_doc, search_str, or a path argument is NULL, if the search string is not found, or if no path exists within the document.

    A warning will occur if the JSON document is not valid, if any of the path arguments is not valid, if return_arg is neither one nor all, or if the escape character is not a constant. NULL is returned in these cases.

    return_arg can be one of two values:

    • one: Terminates after finding the first match, so returns one path string. If there is more than one match, it is undefined which is considered first.

    • all: Returns all matching path strings, without duplicates. Multiple strings are autowrapped as an array. The order is undefined.
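The search itself is a recursive walk that records the path of every matching string value; a minimal sketch over parsed documents (escape characters and wildcard paths omitted):

```python
def json_search(doc, one_or_all, target, path="$"):
    """Sketch of JSON_SEARCH: return the path(s) to matching string values,
    quoted like the server does. None models NULL (no match)."""
    hits = []

    def walk(node, p):
        if isinstance(node, dict):
            for k, v in node.items():
                walk(v, f"{p}.{k}")
        elif isinstance(node, list):
            for i, v in enumerate(node):
                walk(v, f"{p}[{i}]")
        elif node == target:
            hits.append(f'"{p}"')

    walk(doc, path)
    if not hits:
        return None
    return hits[0] if one_or_all == "one" else hits

doc = ["A", [{"B": "1"}], {"C": "AB"}, {"D": "BC"}]
print(json_search(doc, "one", "AB"))  # "$[2].C"
```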

    Examples

    This page is licensed: CC BY-SA / Gnu FDL

    JSON_SEARCH(json_doc, return_arg, search_str[, escape_char[, path] ...])
    Description

    JSON_OBJECTAGG returns a JSON object containing key-value pairs. It takes two arguments, either expressions that evaluate to a single value or column names; the first is used as a key and the second as a value.

    The maximum returned length in bytes is determined by the group_concat_max_len server system variable.

    Returns NULL in the case of an error, or if the result contains no rows.

    JSON_OBJECTAGG cannot currently be used as a window function.
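Note that, as the example below shows, duplicate keys are all kept in the output. A sketch of the aggregation over (key, value) rows, building the JSON text directly rather than via a dict (which would silently drop duplicates):

```python
def json_objectagg(pairs):
    """Sketch of JSON_OBJECTAGG over (key, value) rows with string values;
    duplicate keys are preserved in the output text."""
    if not pairs:
        return None  # NULL when the result contains no rows
    body = ", ".join(f'"{k}":"{v}"' for k, v in pairs)
    return "{" + body + "}"

print(json_objectagg([(1, "Hello"), (1, "World"), (2, "This")]))
# {"1":"Hello", "1":"World", "2":"This"}
```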

    Examples

    This page is licensed: CC BY-SA / Gnu FDL

    JSON_OBJECTAGG(key, value)
    Examples

    In the SET statement below, two escape characters are needed, as a single escape character would be applied by the SQL parser in the SET statement, and the escaped character would not form part of the saved value.

    This page is licensed: CC BY-SA / Gnu FDL

    JSON_VALUE(json_doc, path)
    SELECT json_value('{"key1":123}', '$.key1');
    +--------------------------------------+
    | json_value('{"key1":123}', '$.key1') |
    +--------------------------------------+
    | 123                                  |
    +--------------------------------------+
    
    SELECT json_value('{"key1": [1,2,3], "key1":123}', '$.key1');
    +-------------------------------------------------------+
    | json_value('{"key1": [1,2,3], "key1":123}', '$.key1') |
    +-------------------------------------------------------+
    | 123                                                   |
    +-------------------------------------------------------+
    Returns NULL if not given a valid JSON document, or if there is no match.

    Examples

    This page is licensed: CC BY-SA / Gnu FDL

    JSON_QUERY(json_doc, path)
    SELECT json_query('{"key1":{"a":1, "b":[1,2]}}', '$.key1');
    +-----------------------------------------------------+
    | json_query('{"key1":{"a":1, "b":[1,2]}}', '$.key1') |
    +-----------------------------------------------------+
    | {"a":1, "b":[1,2]}                                  |
    +-----------------------------------------------------+
    
    SELECT json_query('{"key1":123, "key1": [1,2,3]}', '$.key1');
    +-------------------------------------------------------+
    | json_query('{"key1":123, "key1": [1,2,3]}', '$.key1') |
    +-------------------------------------------------------+
    | [1,2,3]                                               |
    +-------------------------------------------------------+
    The LEAD function accesses data from a following row in the same result set without the need for a self-join. The specific row is determined by the offset (default 1), which specifies the number of rows ahead of the current row to use. An offset of 0 is the current row.
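Over rows already sorted by the ORDER BY key, LEAD is just an index shift with NULL where no row exists; a minimal sketch:

```python
def lead(rows, offset=1):
    """Sketch of LEAD(expr, offset) over already-ordered rows;
    None models NULL where no such row exists."""
    n = len(rows)
    return [rows[i + offset] if 0 <= i + offset < n else None
            for i in range(n)]

print(lead([1, 2, 3]))      # [2, 3, None]
print(lead([1, 2, 3], -1))  # [None, 1, 2]  (negative offset looks backwards)
```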

    Example

    See Also

    • LAG - Window function to access a previous row

    This page is licensed: CC BY-SA / Gnu FDL

    LEAD (expr[, offset]) OVER ( 
      [ PARTITION BY partition_expression ] 
      [ ORDER BY order_list ]
    )
    The LAG function accesses data from a previous row according to the ORDER BY clause without the need for a self-join. The specific row is determined by the offset (default 1), which specifies the number of rows behind the current row to use. An offset of 0 is the current row.
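Symmetrically to LEAD, LAG shifts the index backwards over the ordered rows; a minimal sketch:

```python
def lag(rows, offset=1):
    """Sketch of LAG(expr, offset) over already-ordered rows;
    None models NULL where no such row exists."""
    n = len(rows)
    return [rows[i - offset] if 0 <= i - offset < n else None
            for i in range(n)]

print(lag([1, 2, 3]))  # [None, 1, 2]
```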

    Examples

    See Also

    • LEAD - Window function to access a following row

    This page is licensed: CC BY-SA / Gnu FDL

    LAG (expr[, offset]) OVER ( 
      [ PARTITION BY partition_expression ] 
      [ ORDER BY order_list ]
    )
    CREATE TABLE t1 (a INT, b INT);
    
    INSERT INTO t1 VALUES (1, 1),(2, 1), (1, 1),(2, 1), (3, 2),(2, 2),(2, 2),(2, 2);
    
    SELECT JSON_ARRAYAGG(a), JSON_ARRAYAGG(b) FROM t1;
    +-------------------+-------------------+
    | JSON_ARRAYAGG(a)  | JSON_ARRAYAGG(b)  |
    +-------------------+-------------------+
    | [1,2,1,2,3,2,2,2] | [1,1,1,1,2,2,2,2] |
    +-------------------+-------------------+
    
    SELECT JSON_ARRAYAGG(a), JSON_ARRAYAGG(b) FROM t1 GROUP BY b;
    +------------------+------------------+
    | JSON_ARRAYAGG(a) | JSON_ARRAYAGG(b) |
    +------------------+------------------+
    | [1,2,1,2]        | [1,1,1,1]        |
    | [3,2,2,2]        | [2,2,2,2]        |
    +------------------+------------------+
    SET @json = '[1, 2, [3, 4]]';
    
    SELECT JSON_ARRAY_APPEND(@json, '$[0]', 5);
    +-------------------------------------+
    | JSON_ARRAY_APPEND(@json, '$[0]', 5) |
    +-------------------------------------+
    | [[1, 5], 2, [3, 4]]                 |
    +-------------------------------------+
    
    SELECT JSON_ARRAY_APPEND(@json, '$[1]', 6);
    +-------------------------------------+
    | JSON_ARRAY_APPEND(@json, '$[1]', 6) |
    +-------------------------------------+
    | [1, [2, 6], [3, 4]]                 |
    +-------------------------------------+
    
    SELECT JSON_ARRAY_APPEND(@json, '$[1]', 6, '$[2]', 7);
    +------------------------------------------------+
    | JSON_ARRAY_APPEND(@json, '$[1]', 6, '$[2]', 7) |
    +------------------------------------------------+
    | [1, [2, 6], [3, 4, 7]]                         |
    +------------------------------------------------+
    
    SELECT JSON_ARRAY_APPEND(@json, '$', 5);
    +----------------------------------+
    | JSON_ARRAY_APPEND(@json, '$', 5) |
    +----------------------------------+
    | [1, 2, [3, 4], 5]                |
    +----------------------------------+
    
    SET @json = '{"A": 1, "B": [2], "C": [3, 4]}';
    
    SELECT JSON_ARRAY_APPEND(@json, '$.B', 5);
    +------------------------------------+
    | JSON_ARRAY_APPEND(@json, '$.B', 5) |
    +------------------------------------+
    | {"A": 1, "B": [2, 5], "C": [3, 4]} |
    +------------------------------------+
    SET @json = '{"A": 1, "B": [2], "C": [3, 4]}';
    
    SELECT JSON_CONTAINS_PATH(@json, 'one', '$.A', '$.D');
    +------------------------------------------------+
    | JSON_CONTAINS_PATH(@json, 'one', '$.A', '$.D') |
    +------------------------------------------------+
    |                                              1 |
    +------------------------------------------------+
    
    SELECT JSON_CONTAINS_PATH(@json, 'all', '$.A', '$.D');
    +------------------------------------------------+
    | JSON_CONTAINS_PATH(@json, 'all', '$.A', '$.D') |
    +------------------------------------------------+
    |                                              0 |
    +------------------------------------------------+
    SELECT JSON_DEPTH('[]'), JSON_DEPTH('true'), JSON_DEPTH('{}');
    +------------------+--------------------+------------------+
    | JSON_DEPTH('[]') | JSON_DEPTH('true') | JSON_DEPTH('{}') |
    +------------------+--------------------+------------------+
    |                1 |                  1 |                1 |
    +------------------+--------------------+------------------+
    
    SELECT JSON_DEPTH('[1, 2, 3]'), JSON_DEPTH('[[], {}, []]');
    +-------------------------+----------------------------+
    | JSON_DEPTH('[1, 2, 3]') | JSON_DEPTH('[[], {}, []]') |
    +-------------------------+----------------------------+
    |                       2 |                          2 |
    +-------------------------+----------------------------+
    
    SELECT JSON_DEPTH('[1, 2, [3, 4, 5, 6], 7]');
    +---------------------------------------+
    | JSON_DEPTH('[1, 2, [3, 4, 5, 6], 7]') |
    +---------------------------------------+
    |                                     3 |
    +---------------------------------------+
    SELECT JSON_EXISTS('{"key1":"xxxx", "key2":[1, 2, 3]}', "$.key2");
    +------------------------------------------------------------+
    | JSON_EXISTS('{"key1":"xxxx", "key2":[1, 2, 3]}', "$.key2") |
    +------------------------------------------------------------+
    |                                                          1 |
    +------------------------------------------------------------+
    
    SELECT JSON_EXISTS('{"key1":"xxxx", "key2":[1, 2, 3]}', "$.key3");
    +------------------------------------------------------------+
    | JSON_EXISTS('{"key1":"xxxx", "key2":[1, 2, 3]}', "$.key3") |
    +------------------------------------------------------------+
    |                                                          0 |
    +------------------------------------------------------------+
    
    SELECT JSON_EXISTS('{"key1":"xxxx", "key2":[1, 2, 3]}', "$.key2[1]");
    +---------------------------------------------------------------+
    | JSON_EXISTS('{"key1":"xxxx", "key2":[1, 2, 3]}', "$.key2[1]") |
    +---------------------------------------------------------------+
    |                                                             1 |
    +---------------------------------------------------------------+
    
    SELECT JSON_EXISTS('{"key1":"xxxx", "key2":[1, 2, 3]}', "$.key2[10]");
    +----------------------------------------------------------------+
    | JSON_EXISTS('{"key1":"xxxx", "key2":[1, 2, 3]}', "$.key2[10]") |
    +----------------------------------------------------------------+
    |                                                              0 |
    +----------------------------------------------------------------+
    SELECT JSON_REMOVE('{"A": 1, "B": 2, "C": {"D": 3}}', '$.C');
    +-------------------------------------------------------+
    | JSON_REMOVE('{"A": 1, "B": 2, "C": {"D": 3}}', '$.C') |
    +-------------------------------------------------------+
    | {"A": 1, "B": 2}                                      |
    +-------------------------------------------------------+
    
    SELECT JSON_REMOVE('["A", "B", ["C", "D"], "E"]', '$[1]');
    +----------------------------------------------------+
    | JSON_REMOVE('["A", "B", ["C", "D"], "E"]', '$[1]') |
    +----------------------------------------------------+
    | ["A", ["C", "D"], "E"]                             |
    +----------------------------------------------------+
    CREATE TABLE student(course VARCHAR(10), mark int, name varchar(10));
    
    INSERT INTO student VALUES 
      ('Maths', 60, 'Thulile'),
      ('Maths', 60, 'Pritha'),
      ('Maths', 70, 'Voitto'),
      ('Maths', 55, 'Chun'),
      ('Biology', 60, 'Bilal'),
       ('Biology', 70, 'Roger');
    
    SELECT 
      RANK() OVER (PARTITION BY course ORDER BY mark DESC) AS rank, 
      DENSE_RANK() OVER (PARTITION BY course ORDER BY mark DESC) AS dense_rank, 
      ROW_NUMBER() OVER (PARTITION BY course ORDER BY mark DESC) AS row_num, 
      course, mark, name 
    FROM student ORDER BY course, mark DESC;
    +------+------------+---------+---------+------+---------+
    | rank | dense_rank | row_num | course  | mark | name    |
    +------+------------+---------+---------+------+---------+
    |    1 |          1 |       1 | Biology |   70 | Roger   |
    |    2 |          2 |       2 | Biology |   60 | Bilal   |
    |    1 |          1 |       1 | Maths   |   70 | Voitto  |
    |    2 |          2 |       2 | Maths   |   60 | Thulile |
    |    2 |          2 |       3 | Maths   |   60 | Pritha  |
    |    4 |          3 |       4 | Maths   |   55 | Chun    |
    +------+------------+---------+---------+------+---------+
    MEDIAN(<median-arg>) OVER ( [ PARTITION BY partition_expression] )
    PERCENTILE_CONT(0.5) WITHIN 
      GROUP (ORDER BY <median-arg>) OVER ( [ PARTITION BY partition_expression ])
    CREATE TABLE book_rating (name CHAR(30), star_rating TINYINT);
    
    INSERT INTO book_rating VALUES ('Lord of the Ladybirds', 5);
    INSERT INTO book_rating VALUES ('Lord of the Ladybirds', 3);
    INSERT INTO book_rating VALUES ('Lady of the Flies', 1);
    INSERT INTO book_rating VALUES ('Lady of the Flies', 2);
    INSERT INTO book_rating VALUES ('Lady of the Flies', 5);
    
    SELECT name, median(star_rating) OVER (PARTITION BY name) FROM book_rating;
    +-----------------------+----------------------------------------------+
    | name                  | median(star_rating) OVER (PARTITION BY name) |
    +-----------------------+----------------------------------------------+
    | Lord of the Ladybirds |                                 4.0000000000 |
    | Lord of the Ladybirds |                                 4.0000000000 |
    | Lady of the Flies     |                                 2.0000000000 |
    | Lady of the Flies     |                                 2.0000000000 |
    | Lady of the Flies     |                                 2.0000000000 |
    +-----------------------+----------------------------------------------+
    UPDATE t1 SET dyncol_blob=COLUMN_ADD(dyncol_blob, "column_name", "value") WHERE id=1;
    SET @j = '{ "A":1,"B":[2,3]}';
    
    SELECT @j;
    +--------------------+
    | @j                 |
    +--------------------+
    | { "A":1,"B":[2,3]} |
    +--------------------+
    
    SELECT JSON_DETAILED(@j);
    +------------------------------------------------------------+
    | JSON_DETAILED(@j)                                          |
    +------------------------------------------------------------+
    | {
        "A": 1,
        "B": 
        [
            2,
            3
        ]
    } |
    +------------------------------------------------------------+
    SET @json = '[1, 2, [3, 4]]';
    
    SELECT JSON_EXTRACT(@json, '$[1]');
    +-----------------------------+
    | JSON_EXTRACT(@json, '$[1]') |
    +-----------------------------+
    | 2                           |
    +-----------------------------+
    
    SELECT JSON_EXTRACT(@json, '$[2]');
    +-----------------------------+
    | JSON_EXTRACT(@json, '$[2]') |
    +-----------------------------+
    | [3, 4]                      |
    +-----------------------------+
    
    SELECT JSON_EXTRACT(@json, '$[2][1]');
    +--------------------------------+
    | JSON_EXTRACT(@json, '$[2][1]') |
    +--------------------------------+
    | 4                              |
    +--------------------------------+
    CREATE TABLE t1 (
     id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
     val JSON,
     /* other columns here */
     PRIMARY KEY (id)
    );
    ALTER TABLE t1
       ADD COLUMN jnorm JSON AS (JSON_NORMALIZE(val)) VIRTUAL,
       ADD UNIQUE KEY (jnorm);
    INSERT INTO t1 (val) VALUES ('{"name":"alice","color":"blue"}');
    INSERT INTO t1 (val) VALUES ('{ "color": "blue", "name": "alice" }');
    ERROR 1062 (23000): Duplicate entry '{"color":"blue","name":"alice"}' for key 'jnorm'
    SELECT JSON_REPLACE('{ "A": 1, "B": [2, 3]}', '$.B[1]', 4);
    +-----------------------------------------------------+
    | JSON_REPLACE('{ "A": 1, "B": [2, 3]}', '$.B[1]', 4) |
    +-----------------------------------------------------+
    | { "A": 1, "B": [2, 4]}                              |
    +-----------------------------------------------------+
    SELECT JSON_EQUALS('{"a"   :[1, 2, 3],"b":[4]}', '{"b":[4],"a":[1, 2, 3.0]}');
    +------------------------------------------------------------------------+
    | JSON_EQUALS('{"a"   :[1, 2, 3],"b":[4]}', '{"b":[4],"a":[1, 2, 3.0]}') |
    +------------------------------------------------------------------------+
    |                                                                      1 |
    +------------------------------------------------------------------------+
    
    SELECT JSON_EQUALS('{"a":[1, 2, 3]}', '{"a":[1, 2, 3.01]}');
    +------------------------------------------------------+
    | JSON_EQUALS('{"a":[1, 2, 3]}', '{"a":[1, 2, 3.01]}') |
    +------------------------------------------------------+
    |                                                    0 |
    +------------------------------------------------------+
    SELECT COUNT(*) OVER (ORDER BY column) FROM table;
    SELECT JSON_KEY_VALUE('[[1, {"key1":"val1", "key2":"val2"}, 3], 2, 3]', '$[0][1]');
    +-----------------------------------------------------------------------------+
    | JSON_KEY_VALUE('[[1, {"key1":"val1", "key2":"val2"}, 3], 2, 3]', '$[0][1]') |
    +-----------------------------------------------------------------------------+
    | [{"key": "key1", "value": "val1"}, {"key": "key2", "value": "val2"}]        |
    +-----------------------------------------------------------------------------+
    SELECT jt.* FROM JSON_TABLE(
      JSON_KEY_VALUE('[[1, {"key1":"val1", "key2":"val2"}, 3], 2, 3]', '$[0][1]'),
      '$[*]'
      COLUMNS (
        k VARCHAR(20) PATH '$.key',
        v VARCHAR(20) PATH '$.value',
        id FOR ORDINALITY)) AS jt;
    +------+------+------+
    | k    | v    | id   |
    +------+------+------+
    | key1 | val1 |    1 |
    | key2 | val2 |    2 |
    +------+------+------+
    SET @obj1= '{ "a": [1, 2, 3], "b": { "key1":"val1", "key2": {"key3":"val3"} }}';
    
    SELECT JSON_OBJECT_TO_ARRAY(@obj1);
    +-----------------------------------------------------------------------+
    | JSON_OBJECT_TO_ARRAY(@obj1)                                           |
    +-----------------------------------------------------------------------+
    | [["a", [1, 2, 3]], ["b", {"key1": "val1", "key2": {"key3": "val3"}}]] |
    +-----------------------------------------------------------------------+
    SET @json1 = '[1, 2]';
    SET @json2 = '[2, 3]';
    SELECT JSON_MERGE_PATCH(@json1,@json2),JSON_MERGE_PRESERVE(@json1,@json2);
    +---------------------------------+------------------------------------+
    | JSON_MERGE_PATCH(@json1,@json2) | JSON_MERGE_PRESERVE(@json1,@json2) |
    +---------------------------------+------------------------------------+
    | [2, 3]                          | [1, 2, 2, 3]                       |
    +---------------------------------+------------------------------------+
    JSON_VALID(value)
    SELECT JSON_VALID('{"id": 1, "name": "Monty"}');
    +------------------------------------------+
    | JSON_VALID('{"id": 1, "name": "Monty"}') |
    +------------------------------------------+
    |                                        1 |
    +------------------------------------------+
    
    SELECT JSON_VALID('{"id": 1, "name": "Monty", "oddfield"}');
    +------------------------------------------------------+
    | JSON_VALID('{"id": 1, "name": "Monty", "oddfield"}') |
    +------------------------------------------------------+
    |                                                    0 |
    +------------------------------------------------------+
    SELECT JSON_KEYS('{"A": 1, "B": {"C": 2}}');
    +--------------------------------------+
    | JSON_KEYS('{"A": 1, "B": {"C": 2}}') |
    +--------------------------------------+
    | ["A", "B"]                           |
    +--------------------------------------+
    
    SELECT JSON_KEYS('{"A": 1, "B": 2, "C": {"D": 3}}', '$.C');
    +-----------------------------------------------------+
    | JSON_KEYS('{"A": 1, "B": 2, "C": {"D": 3}}', '$.C') |
    +-----------------------------------------------------+
    | ["D"]                                               |
    +-----------------------------------------------------+
    JSON_INSERT(json_doc, path, val[, path, val] ...)
    SET @json = '{ "A": 0, "B": [1, 2]}';
    
    SELECT JSON_INSERT(@json, '$.C', '[3, 4]');
    +--------------------------------------+
    | JSON_INSERT(@json, '$.C', '[3, 4]')  |
    +--------------------------------------+
    | { "A": 0, "B": [1, 2], "C":"[3, 4]"} |
    +--------------------------------------+
    SELECT JSON_OVERLAPS('false', 'false');
    +---------------------------------+
    | JSON_OVERLAPS('false', 'false') |
    +---------------------------------+
    | 1                               |
    +---------------------------------+
    
    SELECT JSON_OVERLAPS('true', '["abc", 1, 2, true, false]');
    +----------------------------------------------------+
    | JSON_OVERLAPS('true','["abc", 1, 2, true, false]') |
    +----------------------------------------------------+
    | 1                                                  |
    +----------------------------------------------------+
    
    SELECT JSON_OVERLAPS('{"A": 1, "B": {"C":2}}', '{"A": 2, "B": {"C":2}}') AS is_overlap;
    +---------------------+
    | is_overlap          |
    +---------------------+
    | 1                   |
    +---------------------+
    SELECT JSON_OVERLAPS('[1, 2, true, false, null]', '[3, 4, [1]]') AS is_overlap;
    +----------------------+
    | is_overlap           |
    +----------------------+
    | 0                    |
    +----------------------+
    SET @json = '["A", [{"B": "1"}], {"C":"AB"}, {"D":"BC"}]';
    
    SELECT JSON_SEARCH(@json, 'one', 'AB');
    +---------------------------------+
    | JSON_SEARCH(@json, 'one', 'AB') |
    +---------------------------------+
    | "$[2].C"                        |
    +---------------------------------+
    SELECT * FROM t1;
    +------+-------+
    | a    | b     |
    +------+-------+
    |    1 | Hello |
    |    1 | World |
    |    2 | This  |
    +------+-------+
    
    SELECT JSON_OBJECTAGG(a, b) FROM t1;
    +----------------------------------------+
    | JSON_OBJECTAGG(a, b)                   |
    +----------------------------------------+
    | {"1":"Hello", "1":"World", "2":"This"} |
    +----------------------------------------+
    SET @json = '{"key1":"60\\" Table", "key2":"1"}';
    
    SELECT JSON_VALUE(@json,'$.key1') AS Name , json_value(@json,'$.key2') as ID;
    +-----------+------+
    | Name      | ID   |
    +-----------+------+
    | 60" Table | 1    |
    +-----------+------+
    CREATE TABLE t1 (pk int primary key, a int, b int, c char(10), d decimal(10, 3), e real);
    
    INSERT INTO t1 VALUES
     ( 1, 0, 1,    'one',    0.1,  0.001),
     ( 2, 0, 2,    'two',    0.2,  0.002),
     ( 3, 0, 3,    'three',  0.3,  0.003),
     ( 4, 1, 2,    'three',  0.4,  0.004),
     ( 5, 1, 1,    'two',    0.5,  0.005),
     ( 6, 1, 1,    'one',    0.6,  0.006),
     ( 7, 2, NULL, 'n_one',  0.5,  0.007),
     ( 8, 2, 1,    'n_two',  NULL, 0.008),
     ( 9, 2, 2,    NULL,     0.7,  0.009),
     (10, 2, 0,    'n_four', 0.8,  0.010),
     (11, 2, 10,   NULL,     0.9,  NULL);
    
    SELECT pk, LEAD(pk) OVER (ORDER BY pk) AS l,
      LEAD(pk,1) OVER (ORDER BY pk) AS l1,
      LEAD(pk,2) OVER (ORDER BY pk) AS l2,
      LEAD(pk,0) OVER (ORDER BY pk) AS l0,
      LEAD(pk,-1) OVER (ORDER BY pk) AS lm1,
      LEAD(pk,-2) OVER (ORDER BY pk) AS lm2 
    FROM t1;
    +----+------+------+------+------+------+------+
    | pk | l    | l1   | l2   | l0   | lm1  | lm2  |
    +----+------+------+------+------+------+------+
    |  1 |    2 |    2 |    3 |    1 | NULL | NULL |
    |  2 |    3 |    3 |    4 |    2 |    1 | NULL |
    |  3 |    4 |    4 |    5 |    3 |    2 |    1 |
    |  4 |    5 |    5 |    6 |    4 |    3 |    2 |
    |  5 |    6 |    6 |    7 |    5 |    4 |    3 |
    |  6 |    7 |    7 |    8 |    6 |    5 |    4 |
    |  7 |    8 |    8 |    9 |    7 |    6 |    5 |
    |  8 |    9 |    9 |   10 |    8 |    7 |    6 |
    |  9 |   10 |   10 |   11 |    9 |    8 |    7 |
    | 10 |   11 |   11 | NULL |   10 |    9 |    8 |
    | 11 | NULL | NULL | NULL |   11 |   10 |    9 |
    +----+------+------+------+------+------+------+
    CREATE TABLE t1 (pk int primary key, a int, b int, c char(10), d decimal(10, 3), e real);
    
    INSERT INTO t1 VALUES
     ( 1, 0, 1,    'one',    0.1,  0.001),
     ( 2, 0, 2,    'two',    0.2,  0.002),
     ( 3, 0, 3,    'three',  0.3,  0.003),
     ( 4, 1, 2,    'three',  0.4,  0.004),
     ( 5, 1, 1,    'two',    0.5,  0.005),
     ( 6, 1, 1,    'one',    0.6,  0.006),
     ( 7, 2, NULL, 'n_one',  0.5,  0.007),
     ( 8, 2, 1,    'n_two',  NULL, 0.008),
     ( 9, 2, 2,    NULL,     0.7,  0.009),
     (10, 2, 0,    'n_four', 0.8,  0.010),
     (11, 2, 10,   NULL,     0.9,  NULL);
    
    SELECT pk, LAG(pk) OVER (ORDER BY pk) AS l,
      LAG(pk,1) OVER (ORDER BY pk) AS l1,
      LAG(pk,2) OVER (ORDER BY pk) AS l2,
      LAG(pk,0) OVER (ORDER BY pk) AS l0,
      LAG(pk,-1) OVER (ORDER BY pk) AS lm1,
      LAG(pk,-2) OVER (ORDER BY pk) AS lm2 
    FROM t1;
    +----+------+------+------+------+------+------+
    | pk | l    | l1   | l2   | l0   | lm1  | lm2  |
    +----+------+------+------+------+------+------+
    |  1 | NULL | NULL | NULL |    1 |    2 |    3 |
    |  2 |    1 |    1 | NULL |    2 |    3 |    4 |
    |  3 |    2 |    2 |    1 |    3 |    4 |    5 |
    |  4 |    3 |    3 |    2 |    4 |    5 |    6 |
    |  5 |    4 |    4 |    3 |    5 |    6 |    7 |
    |  6 |    5 |    5 |    4 |    6 |    7 |    8 |
    |  7 |    6 |    6 |    5 |    7 |    8 |    9 |
    |  8 |    7 |    7 |    6 |    8 |    9 |   10 |
    |  9 |    8 |    8 |    7 |    9 |   10 |   11 |
    | 10 |    9 |    9 |    8 |   10 |   11 | NULL |
    | 11 |   10 |   10 |    9 |   11 | NULL | NULL |
    +----+------+------+------+------+------+------+
Unquotes a JSON value, returning the result as a string, or NULL if the argument is null.

    An error will occur if the given value begins and ends with double quotes and is an invalid JSON string literal.

    If the given value is not a JSON string, value is passed through unmodified.

Certain character sequences have special meanings within a string. Usually, a backslash is ignored, but the escape sequences in the table below are recognised by MariaDB, unless the SQL Mode is set to NO_BACKSLASH_ESCAPES.

Escape sequence | Character
\"              | Double quote (")
\b              | Backspace
\f              | Formfeed
\n              | Newline (linefeed)
\r              | Carriage return
\t              | Tab

    Examples

    With the default SQL Mode:

    Setting NO_BACKSLASH_ESCAPES:

    This page is licensed: CC BY-SA / Gnu FDL

Returns the type of a JSON value as a string, or NULL if the argument is null.

    An error will occur if the argument is an invalid JSON value.

    The following is a complete list of the possible return types:

Return type | Value                                             | Example
ARRAY       | JSON array                                        | [1, 2, {"key": "value"}]
OBJECT      | JSON object                                       | {"key":"value"}
BOOLEAN     | JSON true/false literals                          | true, false
DOUBLE      | A number with at least one floating point decimal | 1.2

    Examples

    This page is licensed: CC BY-SA / Gnu FDL

    NTILE

    Distribute rows into buckets. This function divides the rows in an ordered partition into a specified number of approximately equal groups.

    Syntax

    NTILE (expr) OVER ( 
      [ PARTITION BY partition_expression ] 
      [ ORDER BY order_list ]
    )

    Description

NTILE() is a window function that returns an integer indicating which group a given row falls into. The number of groups is specified by the argument (expr); groups are numbered starting at one. The ordered rows in the partition are divided into the specified number of groups, as equal in size as possible.

    Examples

    This page is licensed: CC BY-SA / Gnu FDL

    RANK

    Calculate rank with gaps. This function assigns a rank to each row, with tied values receiving the same rank and subsequent ranks skipped.

    Syntax

    RANK() OVER (
      [ PARTITION BY partition_expression ]
      [ ORDER BY order_list ]
    )

    Description

RANK() is a window function that displays the number of a given row, starting at one and following the ORDER BY sequence of the window function, with identical values receiving the same result. It is similar to the ROW_NUMBER() function, except that ROW_NUMBER() gives each of the identical values a different row number.

    Examples

The distinction between ROW_NUMBER(), RANK() and DENSE_RANK():

See Also

• ROW_NUMBER()

• DENSE_RANK()

    This page is licensed: CC BY-SA / Gnu FDL

    ROW_NUMBER

    Assign a sequential integer to rows. This function numbers rows within a partition starting at 1, based on the specified order.

    Syntax

    ROW_NUMBER() OVER (
      [ PARTITION BY partition_expression ]
      [ ORDER BY order_list ]
    )

    Description

ROW_NUMBER() is a window function that displays the number of a given row, starting at one and following the ORDER BY sequence of the window function, with identical values receiving different row numbers. It is similar to the RANK() and DENSE_RANK() functions, except that those functions give identical values the same rank.

    Examples

The distinction between RANK(), DENSE_RANK() and ROW_NUMBER():

See Also

• RANK()

• DENSE_RANK()

    This page is licensed: CC BY-SA / Gnu FDL

    PERCENTILE_CONT

    Calculate a continuous percentile. This inverse distribution function returns an interpolated value that corresponds to the specified percentile.

    Syntax

    Description

    PERCENTILE_CONT()

    JSON_SCHEMA_VALID

    Discover JSON_SCHEMA_VALID in MariaDB. Available from version 11.1, this function validates a JSON document against a specified JSON Schema (Draft 2020), returning true if valid.

    JSON_SCHEMA_VALID is available from MariaDB 11.1.

    Syntax

    PERCENTILE_DISC

    Calculate a discrete percentile. This inverse distribution function returns an actual value from the dataset that corresponds to the specified percentile.

    Syntax

    Description

    PERCENTILE_DISC()

    JSON_UNQUOTE(val)
    SELECT JSON_UNQUOTE('"Monty"');
    +-------------------------+
    | JSON_UNQUOTE('"Monty"') |
    +-------------------------+
    | Monty                   |
    +-------------------------+
    SELECT JSON_UNQUOTE('Si\bng\ting');
    +-----------------------------+
    | JSON_UNQUOTE('Si\bng\ting') |
    +-----------------------------+
    | Sng	ing                   |
    +-----------------------------+
    SET @@sql_mode = 'NO_BACKSLASH_ESCAPES';
    
    SELECT JSON_UNQUOTE('Si\bng\ting');
    +-----------------------------+
    | JSON_UNQUOTE('Si\bng\ting') |
    +-----------------------------+
    | Si\bng\ting                 |
    +-----------------------------+
    JSON_TYPE(json_val)
    SELECT JSON_TYPE('{"A": 1, "B": 2, "C": 3}');
    +---------------------------------------+
    | JSON_TYPE('{"A": 1, "B": 2, "C": 3}') |
    +---------------------------------------+
    | OBJECT                                |
    +---------------------------------------+

\\     | Backslash (\)
\uXXXX | UTF-8 bytes for Unicode value XXXX

INTEGER | A number without a floating point decimal                                     | 1
NULL    | JSON null literal (returned as a string; not to be confused with the SQL NULL value) | null
STRING  | JSON string                                                                   | "a sample string"

(standing for continuous percentile) is a window function which returns a value corresponding to the given fraction in the sort order. If required, it interpolates between adjacent input items.

    Essentially, the following process is followed to find the value to return:

    • Get the number of rows in the partition, denoted by N

    • RN = p*(N-1), where p denotes the argument to the PERCENTILE_CONT function

    • Calculate FRN as FRN=floor(RN) and CRN as CRN=ceil(RN)

    • Look up rows FRN and CRN

    • If (CRN = FRN = RN) then the result is (value of expression from row at RN)

    • Otherwise the result is (CRN - RN) * (value of expression for row at FRN) + (RN - FRN) * (value of expression for row at CRN)

    The MEDIAN function is a specific case of PERCENTILE_CONT, equivalent to PERCENTILE_CONT(0.5).
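As a worked illustration of the steps above, take a hypothetical ordered partition containing the values 1, 2 and 5 (rows counted from zero) and PERCENTILE_CONT(0.6):

    N = 3
    RN = 0.6 * (3 - 1) = 1.2
    FRN = floor(1.2) = 1, CRN = ceil(1.2) = 2
    result = (2 - 1.2) * 2 + (1.2 - 1) * 5 = 1.6 + 1.0 = 2.6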

    Examples

    See Also

    • MEDIAN() - a special case of PERCENTILE_CONT equivalent to PERCENTILE_CONT(0.5)

    This page is licensed: CC BY-SA / Gnu FDL

    Description

JSON_SCHEMA_VALID allows MariaDB to support JSON schema validation. If the given JSON document validates against the schema, the function returns true (1); otherwise it returns false (0). It does not report which keyword the document failed against.

    The function supports JSON Schema Draft 2020 with a few exceptions:

    • External resources are not supported.

    • Hyper schema keywords are not supported.

• Formats, such as date, email, etc., are treated as annotations only.

    Examples

To create validation rules for a JSON field:

    This page is licensed: CC BY-SA / Gnu FDL

(standing for discrete percentile) is a window function which returns the first value in the set whose ordered position is the same as or greater than the specified fraction.

    Essentially, the following process is followed to find the value to return:

    • Get the number of rows in the partition.

    • Walk through the partition, in order, until finding the first row with CUME_DIST() >= function_argument.
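As a worked illustration, take a hypothetical ordered partition containing the values 1, 2 and 5, and PERCENTILE_DISC(0.5):

    CUME_DIST() values, in order: 1/3, 2/3, 3/3
    the first row with CUME_DIST() >= 0.5 is the second row
    result = 2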

    Examples

    See Also

    • CUME_DIST()

    This page is licensed: CC BY-SA / Gnu FDL

    create table t1 (
        pk int primary key,
        a int,
        b int
      );
    
    insert into t1 values
        (11 , 0, 10),
        (12 , 0, 10),
        (13 , 1, 10),
        (14 , 1, 10),
        (18 , 2, 10),
        (15 , 2, 20),
        (16 , 2, 20),
        (17 , 2, 20),
        (19 , 4, 20),
        (20 , 4, 20);
    
    select pk, a, b,
        ntile(1) over (order by pk)
      from t1;
    +----+------+------+-----------------------------+
    | pk | a    | b    | ntile(1) over (order by pk) |
    +----+------+------+-----------------------------+
    | 11 |    0 |   10 |                           1 |
    | 12 |    0 |   10 |                           1 |
    | 13 |    1 |   10 |                           1 |
    | 14 |    1 |   10 |                           1 |
    | 15 |    2 |   20 |                           1 |
    | 16 |    2 |   20 |                           1 |
    | 17 |    2 |   20 |                           1 |
    | 18 |    2 |   10 |                           1 |
    | 19 |    4 |   20 |                           1 |
    | 20 |    4 |   20 |                           1 |
    +----+------+------+-----------------------------+
    
    select pk, a, b,
        ntile(4) over (order by pk)
     from t1;
    +----+------+------+-----------------------------+
    | pk | a    | b    | ntile(4) over (order by pk) |
    +----+------+------+-----------------------------+
    | 11 |    0 |   10 |                           1 |
    | 12 |    0 |   10 |                           1 |
    | 13 |    1 |   10 |                           1 |
    | 14 |    1 |   10 |                           2 |
    | 15 |    2 |   20 |                           2 |
    | 16 |    2 |   20 |                           2 |
    | 17 |    2 |   20 |                           3 |
    | 18 |    2 |   10 |                           3 |
    | 19 |    4 |   20 |                           4 |
    | 20 |    4 |   20 |                           4 |
    +----+------+------+-----------------------------+
    CREATE TABLE student(course VARCHAR(10), mark int, name varchar(10));
    
    INSERT INTO student VALUES 
      ('Maths', 60, 'Thulile'),
      ('Maths', 60, 'Pritha'),
      ('Maths', 70, 'Voitto'),
      ('Maths', 55, 'Chun'),
      ('Biology', 60, 'Bilal'),
       ('Biology', 70, 'Roger');
    
    SELECT 
      RANK() OVER (PARTITION BY course ORDER BY mark DESC) AS rank, 
      DENSE_RANK() OVER (PARTITION BY course ORDER BY mark DESC) AS dense_rank, 
      ROW_NUMBER() OVER (PARTITION BY course ORDER BY mark DESC) AS row_num, 
      course, mark, name 
    FROM student ORDER BY course, mark DESC;
    +------+------------+---------+---------+------+---------+
    | rank | dense_rank | row_num | course  | mark | name    |
    +------+------------+---------+---------+------+---------+
    |    1 |          1 |       1 | Biology |   70 | Roger   |
    |    2 |          2 |       2 | Biology |   60 | Bilal   |
    |    1 |          1 |       1 | Maths   |   70 | Voitto  |
    |    2 |          2 |       2 | Maths   |   60 | Thulile |
    |    2 |          2 |       3 | Maths   |   60 | Pritha  |
    |    4 |          3 |       4 | Maths   |   55 | Chun    |
    +------+------------+---------+---------+------+---------+
    CREATE TABLE student(course VARCHAR(10), mark int, name varchar(10));
    
    INSERT INTO student VALUES 
      ('Maths', 60, 'Thulile'),
      ('Maths', 60, 'Pritha'),
      ('Maths', 70, 'Voitto'),
      ('Maths', 55, 'Chun'),
      ('Biology', 60, 'Bilal'),
       ('Biology', 70, 'Roger');
    
    SELECT 
      RANK() OVER (PARTITION BY course ORDER BY mark DESC) AS rank, 
      DENSE_RANK() OVER (PARTITION BY course ORDER BY mark DESC) AS dense_rank, 
      ROW_NUMBER() OVER (PARTITION BY course ORDER BY mark DESC) AS row_num, 
      course, mark, name 
    FROM student ORDER BY course, mark DESC;
    +------+------------+---------+---------+------+---------+
    | rank | dense_rank | row_num | course  | mark | name    |
    +------+------------+---------+---------+------+---------+
    |    1 |          1 |       1 | Biology |   70 | Roger   |
    |    2 |          2 |       2 | Biology |   60 | Bilal   |
    |    1 |          1 |       1 | Maths   |   70 | Voitto  |
    |    2 |          2 |       2 | Maths   |   60 | Thulile |
    |    2 |          2 |       3 | Maths   |   60 | Pritha  |
    |    4 |          3 |       4 | Maths   |   55 | Chun    |
    +------+------------+---------+---------+------+---------+
    CREATE TABLE book_rating (name CHAR(30), star_rating TINYINT);
    
    INSERT INTO book_rating VALUES ('Lord of the Ladybirds', 5);
    INSERT INTO book_rating VALUES ('Lord of the Ladybirds', 3);
    INSERT INTO book_rating VALUES ('Lady of the Flies', 1);
    INSERT INTO book_rating VALUES ('Lady of the Flies', 2);
    INSERT INTO book_rating VALUES ('Lady of the Flies', 5);
    
    SELECT name, PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY star_rating) 
      OVER (PARTITION BY name) AS pc 
      FROM book_rating;
    +-----------------------+--------------+
    | name                  | pc           |
    +-----------------------+--------------+
    | Lord of the Ladybirds | 4.0000000000 |
    | Lord of the Ladybirds | 4.0000000000 |
    | Lady of the Flies     | 2.0000000000 |
    | Lady of the Flies     | 2.0000000000 |
    | Lady of the Flies     | 2.0000000000 |
    +-----------------------+--------------+
    
    SELECT name, PERCENTILE_CONT(1) WITHIN GROUP (ORDER BY star_rating) 
      OVER (PARTITION BY name) AS pc 
      FROM book_rating;
    +-----------------------+--------------+
    | name                  | pc           |
    +-----------------------+--------------+
    | Lord of the Ladybirds | 5.0000000000 |
    | Lord of the Ladybirds | 5.0000000000 |
    | Lady of the Flies     | 5.0000000000 |
    | Lady of the Flies     | 5.0000000000 |
    | Lady of the Flies     | 5.0000000000 |
    +-----------------------+--------------+
    
    SELECT name, PERCENTILE_CONT(0) WITHIN GROUP (ORDER BY star_rating) 
      OVER (PARTITION BY name) AS pc 
      FROM book_rating;
    +-----------------------+--------------+
    | name                  | pc           |
    +-----------------------+--------------+
    | Lord of the Ladybirds | 3.0000000000 |
    | Lord of the Ladybirds | 3.0000000000 |
    | Lady of the Flies     | 1.0000000000 |
    | Lady of the Flies     | 1.0000000000 |
    | Lady of the Flies     | 1.0000000000 |
    +-----------------------+--------------+
    
    SELECT name, PERCENTILE_CONT(0.6) WITHIN GROUP (ORDER BY star_rating) 
      OVER (PARTITION BY name) AS pc 
      FROM book_rating;
    +-----------------------+--------------+
    | name                  | pc           |
    +-----------------------+--------------+
    | Lord of the Ladybirds | 4.2000000000 |
    | Lord of the Ladybirds | 4.2000000000 |
    | Lady of the Flies     | 2.6000000000 |
    | Lady of the Flies     | 2.6000000000 |
    | Lady of the Flies     | 2.6000000000 |
    +-----------------------+--------------+
    JSON_SCHEMA_VALID(schema, json);
    CREATE TABLE obj_table(val_obj JSON CHECK(JSON_SCHEMA_VALID('{
      "type":"object",
        "properties": {
          "number1":{
            "type":"number",
            "maximum":5,
            "const":4
          },
          "string1":{
            "type":"string",
            "maxLength":5,
            "minLength":3
          },
        "object1":{
          "type":"object",
           "properties":{
             "key1": {"type":"string"},
             "key2":{"type":"array"},
             "key3":{"type":"number", "minimum":3}
           },
           "dependentRequired": { "key1":["key3"] }
         }
      },
      "required":["number1","object1"]
      }', val_obj)));
    
    INSERT INTO obj_table VALUES(
      '{"number1":4, "string1":"abcd", 
      "object1":{"key1":"val1", "key2":[1,2,3, "string1"], "key3":4}}'
    );
    
    INSERT INTO obj_table VALUES(
      '{"number1":3, "string1":"abcd", 
      "object1":{"key1":"val1", "key2":[1,2,3, "string1"], "key3":4}}'
    );
    ERROR 4025 (23000): CONSTRAINT `obj_table.val_obj` failed for `test`.`obj_table`
    
    SELECT * FROM obj_table;
    +--------------------------------------------------------------------------------------------------+
    | val_obj                                                                                          |
    +--------------------------------------------------------------------------------------------------+
    | {"number1":4, "string1":"abcd", "object1":{"key1":"val1", "key2":[1,2,3, "string1"], "key3":4}} |
    +--------------------------------------------------------------------------------------------------+
    
    SET @schema= '{
      "properties" : {
        "number1":{ "maximum":10 },
        "string1" : { "maxLength": 3} 
      }
    }';
    
    SELECT JSON_SCHEMA_VALID(@schema, '{ "number1":25, "string1":"ab" }');
    +----------------------------------------------------------------+
    | JSON_SCHEMA_VALID(@schema, '{ "number1":25, "string1":"ab" }') |
    +----------------------------------------------------------------+
    |                                                              0 |
    +----------------------------------------------------------------+
    
    SELECT JSON_SCHEMA_VALID(@schema, '{ "number1":10, "string1":"ab" }');
    +----------------------------------------------------------------+
    | JSON_SCHEMA_VALID(@schema, '{ "number1":10, "string1":"ab" }') |
    +----------------------------------------------------------------+
    |                                                              1 |
    +----------------------------------------------------------------+
    CREATE TABLE book_rating (name CHAR(30), star_rating TINYINT);
    
    INSERT INTO book_rating VALUES ('Lord of the Ladybirds', 5);
    INSERT INTO book_rating VALUES ('Lord of the Ladybirds', 3);
    INSERT INTO book_rating VALUES ('Lady of the Flies', 1);
    INSERT INTO book_rating VALUES ('Lady of the Flies', 2);
    INSERT INTO book_rating VALUES ('Lady of the Flies', 5);
    
    SELECT name, PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY star_rating)
      OVER (PARTITION BY name) AS pc FROM book_rating;
    +-----------------------+------+
    | name                  | pc   |
    +-----------------------+------+
    | Lord of the Ladybirds |    3 |
    | Lord of the Ladybirds |    3 |
    | Lady of the Flies     |    2 |
    | Lady of the Flies     |    2 |
    | Lady of the Flies     |    2 |
    +-----------------------+------+
    5 rows in set (0.000 sec)
    
    SELECT name, PERCENTILE_DISC(0) WITHIN GROUP (ORDER BY star_rating) 
     OVER (PARTITION BY name) AS pc FROM book_rating;
    +-----------------------+------+
    | name                  | pc   |
    +-----------------------+------+
    | Lord of the Ladybirds |    3 |
    | Lord of the Ladybirds |    3 |
    | Lady of the Flies     |    1 |
    | Lady of the Flies     |    1 |
    | Lady of the Flies     |    1 |
    +-----------------------+------+
    5 rows in set (0.000 sec)
    
    SELECT name, PERCENTILE_DISC(1) WITHIN GROUP (ORDER BY star_rating) 
      OVER (PARTITION BY name) AS pc FROM book_rating;
    +-----------------------+------+
    | name                  | pc   |
    +-----------------------+------+
    | Lord of the Ladybirds |    5 |
    | Lord of the Ladybirds |    5 |
    | Lady of the Flies     |    5 |
    | Lady of the Flies     |    5 |
    | Lady of the Flies     |    5 |
    +-----------------------+------+
    5 rows in set (0.000 sec)
    
    SELECT name, PERCENTILE_DISC(0.6) WITHIN GROUP (ORDER BY star_rating) 
      OVER (PARTITION BY name) AS pc FROM book_rating;
    +-----------------------+------+
    | name                  | pc   |
    +-----------------------+------+
    | Lord of the Ladybirds |    5 |
    | Lord of the Ladybirds |    5 |
    | Lady of the Flies     |    2 |
    | Lady of the Flies     |    2 |
    | Lady of the Flies     |    2 |
    +-----------------------+------+

    JSONPath Expressions

    Understand JSONPath syntax. This guide explains how to use JSONPath to select and extract specific elements, objects, or arrays from JSON documents.

    A number of JSON functions accept JSON Path expressions. MariaDB defines this path as follows:

JSON Path Syntax

    path : ['lax'] '$' [step]*

The path starts with an optional path mode. At the moment, MariaDB supports only the "lax" mode, which is also the mode used when no mode is explicitly specified.

    The $ symbol represents the context item. The search always starts from the context item; because of that, the path always starts with $.

    Then, it is followed by zero or more steps, which select element(s) in the JSON document. A step may be one of the following:

    • Object member selector.

    • Array element selector.

    • Wildcard selector.

    Object Member Selector

    To select member(s) in a JSON object, one can use one of the following:

    • .memberName selects the value of the member with name memberName.

    • ."memberName" - the same as above but allows one to select a member with a name that's not a valid identifier (that is, has space, dot, and/or other characters).

    • .* - selects the values of all members of the object.

    If the current item is an array (instead of an object), nothing will be selected.
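A brief sketch of member selection, using an illustrative document (the member names here are hypothetical):

    SET @json = '{"name": "Monty", "home address": {"city": "Helsinki"}}';

    SELECT JSON_EXTRACT(@json, '$.name');
    SELECT JSON_EXTRACT(@json, '$."home address".city');
    SELECT JSON_EXTRACT(@json, '$.*');

The quoted form is required for "home address" because the member name contains a space.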

    Array Element Selector

    To select elements of an array, one can use one of the following:

    • [N] selects element number N in the array. The elements are counted from zero.

    • [*] selects all elements in the array.

    If the current item is an object (instead of an array), nothing will be selected.
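A brief sketch of element selection, using an illustrative array (the values here are hypothetical):

    SET @json = '{"A": [10, 20, 30]}';

    SELECT JSON_EXTRACT(@json, '$.A[0]');  -- first element: 10
    SELECT JSON_EXTRACT(@json, '$.A[*]');  -- all elements: [10, 20, 30]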

JSON path supports negative indexes in an array, the 'last' keyword, and range notation (the 'to' keyword) for accessing array elements. Negative indexes start from -1.

    • [-N] selects the Nth element from the end of the array.

    • [last-N] selects the element N positions before the last element.

    • [M to N] selects the range of elements from index M to index N, inclusive.

For example, $[last-7][0] selects the first element of the eighth-from-last element of a two-dimensional array.

Note: In range notation, if M resolves to a higher position than N (after converting negative indexes relative to the array size), the range is impossible and NULL is returned.
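A brief sketch of these selectors, assuming a MariaDB version that supports them (the array here is illustrative):

    SET @json = '[10, 20, 30, 40, 50]';

    SELECT JSON_EXTRACT(@json, '$[-1]');      -- 50
    SELECT JSON_EXTRACT(@json, '$[last]');    -- 50
    SELECT JSON_EXTRACT(@json, '$[last-1]');  -- 40
    SELECT JSON_EXTRACT(@json, '$[1 to 3]');  -- [20, 30, 40]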

    Wildcard

    The wildcard step, **, recursively selects all child elements of the current element. Both array elements and object members are selected.

    The wildcard step must not be the last step in the JSONPath expression. It must be followed by an array or object member selector step.

    For example, this query selects all object members named price in the document:

    This query, however, selects the second element in each of the arrays present in the document:
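The two queries described above might be sketched as follows (the document and its values are illustrative):

    SET @json = '{"book":  {"title": "A", "price": 10},
                  "movie": {"title": "B", "price": 20},
                  "pairs": [[1, 2], [3, 4]]}';

    SELECT JSON_EXTRACT(@json, '$**.price');  -- all members named price
    SELECT JSON_EXTRACT(@json, '$**[1]');     -- second element of every array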

    Compatibility

    MariaDB's JSONPath syntax supports a subset of JSON Path's definition in the SQL Standard. The most notable things not supported are the strict mode and filters.

MariaDB's JSONPath is close to MySQL's JSONPath. The wildcard step (**) is a non-standard extension that has the same meaning as in MySQL. The main difference is that MySQL does not allow the mode to be specified explicitly (it always uses lax mode implicitly).

    This page is licensed: CC BY-SA / Gnu FDL

    FIRST_VALUE

    Return the value from the first row in the window frame. This function retrieves the value of an expression from the first row of the defined window.

    Syntax

    FIRST_VALUE(expr) OVER (
      [ PARTITION BY partition_expression ]
      [ ORDER BY order_list ]
    )

    Description

    FIRST_VALUE returns the first result from an ordered set, or NULL if no such result exists.

    Examples

See Also

• LAST_VALUE

    This page is licensed: CC BY-SA / Gnu FDL

    CUME_DIST

    Calculate cumulative distribution. This function returns the relative rank of the current row within its partition, calculated as the percentage of rows preceding or peer to it.

    Syntax

    CUME_DIST() OVER ( 
      [ PARTITION BY partition_expression ] 
      [ ORDER BY order_list ]
    )

    Description

CUME_DIST() is a window function that returns the cumulative distribution of a given row. The following formula is used to calculate the value:

    (number of rows <= current row) / (total rows)

    Examples

    See Also

    This page is licensed: CC BY-SA / Gnu FDL

    PERCENT_RANK

    Calculate relative rank. This function returns the percentage rank of the current row within its partition, ranging from 0 to 1.

    Syntax

    Description

PERCENT_RANK() is a window function.

    JSON_MERGE

    Merge JSON documents. This function merges two or more JSON documents into a single JSON document, preserving all keys and values.

    Syntax

    Description

    Merges the given JSON documents.



PERCENT_RANK() returns the relative percent rank of a given row. The following formula is used to calculate the percent rank:

    (rank - 1) / (number of rows in the partition - 1)

    Examples

    See Also

    • CUME_DIST()

    This page is licensed: CC BY-SA / Gnu FDL

    PERCENT_RANK() OVER (
      [ PARTITION BY partition_expression ] 
      [ ORDER BY order_list ]
    )
Returns the merged result, or NULL if any argument is NULL.

    An error occurs if any of the arguments are not valid JSON documents.

    JSON_MERGE is deprecated. JSON_MERGE_PATCH is an RFC 7396-compliant replacement, and JSON_MERGE_PRESERVE is a synonym.

    Example
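A minimal sketch of the merge behaviour (arrays are concatenated, and values under duplicate keys are collected into an array; JSON_MERGE_PRESERVE behaves identically):

    SELECT JSON_MERGE('[1, 2]', '[3, 4]');
    -- [1, 2, 3, 4]

    SELECT JSON_MERGE('{"a": 1}', '{"a": 2}');
    -- {"a": [1, 2]}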

    See Also

    • JSON_MERGE_PATCH

    • JSON_MERGE_PRESERVE

    This page is licensed: CC BY-SA / Gnu FDL

    CREATE TABLE t1 (
      pk int primary key,
      a int,
      b int,
      c char(10),
      d decimal(10, 3),
      e real
    );
    
    INSERT INTO t1 VALUES
    ( 1, 0, 1,    'one',    0.1,  0.001),
    ( 2, 0, 2,    'two',    0.2,  0.002),
    ( 3, 0, 3,    'three',  0.3,  0.003),
    ( 4, 1, 2,    'three',  0.4,  0.004),
    ( 5, 1, 1,    'two',    0.5,  0.005),
    ( 6, 1, 1,    'one',    0.6,  0.006),
    ( 7, 2, NULL, 'n_one',  0.5,  0.007),
    ( 8, 2, 1,    'n_two',  NULL, 0.008),
    ( 9, 2, 2,    NULL,     0.7,  0.009),
    (10, 2, 0,    'n_four', 0.8,  0.010),
    (11, 2, 10,   NULL,     0.9,  NULL);
    
    SELECT pk, FIRST_VALUE(pk) OVER (ORDER BY pk) AS first_asc,
               LAST_VALUE(pk) OVER (ORDER BY pk) AS last_asc,
               FIRST_VALUE(pk) OVER (ORDER BY pk DESC) AS first_desc,
               LAST_VALUE(pk) OVER (ORDER BY pk DESC) AS last_desc
    FROM t1
    ORDER BY pk DESC;
    
    +----+-----------+----------+------------+-----------+
    | pk | first_asc | last_asc | first_desc | last_desc |
    +----+-----------+----------+------------+-----------+
    | 11 |         1 |       11 |         11 |        11 |
    | 10 |         1 |       10 |         11 |        10 |
    |  9 |         1 |        9 |         11 |         9 |
    |  8 |         1 |        8 |         11 |         8 |
    |  7 |         1 |        7 |         11 |         7 |
    |  6 |         1 |        6 |         11 |         6 |
    |  5 |         1 |        5 |         11 |         5 |
    |  4 |         1 |        4 |         11 |         4 |
    |  3 |         1 |        3 |         11 |         3 |
    |  2 |         1 |        2 |         11 |         2 |
    |  1 |         1 |        1 |         11 |         1 |
    +----+-----------+----------+------------+-----------+
    CREATE OR REPLACE TABLE t1 (i int);
    INSERT INTO t1 VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10);
    
    SELECT i,
      FIRST_VALUE(i) OVER (ORDER BY i ROWS BETWEEN CURRENT ROW and 1 FOLLOWING) AS f_1f,
      LAST_VALUE(i) OVER (ORDER BY i ROWS BETWEEN CURRENT ROW and 1 FOLLOWING) AS l_1f,
      FIRST_VALUE(i) OVER (ORDER BY i ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS f_1p1f,
      LAST_VALUE(i) OVER (ORDER BY i ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS l_1p1f,
      FIRST_VALUE(i) OVER (ORDER BY i ROWS BETWEEN 2 PRECEDING AND 1 PRECEDING) AS f_2p1p,
      LAST_VALUE(i) OVER (ORDER BY i ROWS BETWEEN 2 PRECEDING AND 1 PRECEDING) AS l_2p1p,
      FIRST_VALUE(i) OVER (ORDER BY i ROWS BETWEEN 1 FOLLOWING AND 2 FOLLOWING) AS f_1f2f,
      LAST_VALUE(i) OVER (ORDER BY i ROWS BETWEEN 1 FOLLOWING AND 2 FOLLOWING) AS l_1f2f
    FROM t1;
    
    +------+------+------+--------+--------+--------+--------+--------+--------+
    | i    | f_1f | l_1f | f_1p1f | l_1p1f | f_2p1p | l_2p1p | f_1f2f | l_1f2f |
    +------+------+------+--------+--------+--------+--------+--------+--------+
    |    1 |    1 |    2 |      1 |      2 |   NULL |   NULL |      2 |      3 |
    |    2 |    2 |    3 |      1 |      3 |      1 |      1 |      3 |      4 |
    |    3 |    3 |    4 |      2 |      4 |      1 |      2 |      4 |      5 |
    |    4 |    4 |    5 |      3 |      5 |      2 |      3 |      5 |      6 |
    |    5 |    5 |    6 |      4 |      6 |      3 |      4 |      6 |      7 |
    |    6 |    6 |    7 |      5 |      7 |      4 |      5 |      7 |      8 |
    |    7 |    7 |    8 |      6 |      8 |      5 |      6 |      8 |      9 |
    |    8 |    8 |    9 |      7 |      9 |      6 |      7 |      9 |     10 |
    |    9 |    9 |   10 |      8 |     10 |      7 |      8 |     10 |     10 |
    |   10 |   10 |   10 |      9 |     10 |      8 |      9 |   NULL |   NULL |
    +------+------+------+--------+--------+--------+--------+--------+--------+
    create table t1 (
      pk int primary key,
      a int,
      b int
    );
    
    
    insert into t1 values
    ( 1 , 0, 10),
    ( 2 , 0, 10),
    ( 3 , 1, 10),
    ( 4 , 1, 10),
    ( 8 , 2, 10),
    ( 5 , 2, 20),
    ( 6 , 2, 20),
    ( 7 , 2, 20),
    ( 9 , 4, 20),
    (10 , 4, 20);
    
    select pk, a, b,
        rank() over (order by a) as rank,
        percent_rank() over (order by a) as pct_rank,
        cume_dist() over (order by a) as cume_dist
    from t1;
    +----+------+------+------+--------------+--------------+
    | pk | a    | b    | rank | pct_rank     | cume_dist    |
    +----+------+------+------+--------------+--------------+
    |  1 |    0 |   10 |    1 | 0.0000000000 | 0.2000000000 |
    |  2 |    0 |   10 |    1 | 0.0000000000 | 0.2000000000 |
    |  3 |    1 |   10 |    3 | 0.2222222222 | 0.4000000000 |
    |  4 |    1 |   10 |    3 | 0.2222222222 | 0.4000000000 |
    |  5 |    2 |   20 |    5 | 0.4444444444 | 0.8000000000 |
    |  6 |    2 |   20 |    5 | 0.4444444444 | 0.8000000000 |
    |  7 |    2 |   20 |    5 | 0.4444444444 | 0.8000000000 |
    |  8 |    2 |   10 |    5 | 0.4444444444 | 0.8000000000 |
    |  9 |    4 |   20 |    9 | 0.8888888889 | 1.0000000000 |
    | 10 |    4 |   20 |    9 | 0.8888888889 | 1.0000000000 |
    +----+------+------+------+--------------+--------------+
    
    select pk, a, b,
           percent_rank() over (order by pk) as pct_rank,
           cume_dist() over (order by pk) as cume_dist
    from t1 order by pk;
    +----+------+------+--------------+--------------+
    | pk | a    | b    | pct_rank     | cume_dist    |
    +----+------+------+--------------+--------------+
    |  1 |    0 |   10 | 0.0000000000 | 0.1000000000 |
    |  2 |    0 |   10 | 0.1111111111 | 0.2000000000 |
    |  3 |    1 |   10 | 0.2222222222 | 0.3000000000 |
    |  4 |    1 |   10 | 0.3333333333 | 0.4000000000 |
    |  5 |    2 |   20 | 0.4444444444 | 0.5000000000 |
    |  6 |    2 |   20 | 0.5555555556 | 0.6000000000 |
    |  7 |    2 |   20 | 0.6666666667 | 0.7000000000 |
    |  8 |    2 |   10 | 0.7777777778 | 0.8000000000 |
    |  9 |    4 |   20 | 0.8888888889 | 0.9000000000 |
    | 10 |    4 |   20 | 1.0000000000 | 1.0000000000 |
    +----+------+------+--------------+--------------+
    
    select pk, a, b,
            percent_rank() over (partition by a order by a) as pct_rank,
            cume_dist() over (partition by a order by a) as cume_dist
    from t1;
    +----+------+------+--------------+--------------+
    | pk | a    | b    | pct_rank     | cume_dist    |
    +----+------+------+--------------+--------------+
    |  1 |    0 |   10 | 0.0000000000 | 1.0000000000 |
    |  2 |    0 |   10 | 0.0000000000 | 1.0000000000 |
    |  3 |    1 |   10 | 0.0000000000 | 1.0000000000 |
    |  4 |    1 |   10 | 0.0000000000 | 1.0000000000 |
    |  5 |    2 |   20 | 0.0000000000 | 1.0000000000 |
    |  6 |    2 |   20 | 0.0000000000 | 1.0000000000 |
    |  7 |    2 |   20 | 0.0000000000 | 1.0000000000 |
    |  8 |    2 |   10 | 0.0000000000 | 1.0000000000 |
    |  9 |    4 |   20 | 0.0000000000 | 1.0000000000 |
    | 10 |    4 |   20 | 0.0000000000 | 1.0000000000 |
    +----+------+------+--------------+--------------+
    SET @json= '[1, 2, 3, 4, 5]';
    SELECT JSON_EXTRACT(@json, '$[4 to 2]');
    +-----------------------------------+
    | JSON_EXTRACT(@json, '$[4 to 2]')  |
    +-----------------------------------+
    | NULL                              |
    +-----------------------------------+
    SELECT json_extract(@json_doc, '$**.price');
    SELECT json_extract(@json_doc, '$**[2]');
    SET @json='{
                "A": [0,
                      [1, 2, 3],
                      [4, 5, 6],
                      "seven",
                       0.8,
                       true,
                       false,
                       "eleven",
                      [12, [13, 14], {"key1":"value1"},[15]],
                      true],
                "B": {"C": 1},
                "D": 2
               }';
    SELECT JSON_EXTRACT(@json, '$.A[-8][1]');
    +--------------------------------------------------+
    | JSON_EXTRACT(@json, '$.A[-8][1]')                |
    +--------------------------------------------------+
    | 5                                                |
    +--------------------------------------------------+
    
    SELECT JSON_EXTRACT(@json, '$.A[last-7][1]');
    +---------------------------------------+
    | JSON_EXTRACT(@json, '$.A[last-7][1]') |
    +---------------------------------------+
    | 5                                     |
    +---------------------------------------+
    
    
    SET @json= '[
                 [1, {"key1": "value1"}, 3],
                 [false, 5, 6],
                 [7, 8, [9, {"key2": 2}, 11]],
                 [15, 1.34, [14], ["string1", [16, {"key1":[1,2,3,[4,5,6]]}, 18]]],
                 [19, 20],
                 21, 22
                ]';
    
    SELECT JSON_EXTRACT(@json, '$[0 to 3][2]');
    +-----------------------------------------------+
    | JSON_EXTRACT(@json, '$[0 to 3][2]')           |
    +-----------------------------------------------+
    | [3, 6, [9, {"key2": 2}, 11], [14]]            |
    +-----------------------------------------------+
    PERCENT_RANK is calculated as follows:

    (rank - 1) / (number of rows in the window or partition - 1)

    For example, a row with rank 3 in a 10-row window has PERCENT_RANK (3 - 1) / (10 - 1) ≈ 0.2222.
    JSON_MERGE(json_doc, json_doc[, json_doc] ...)
    SET @json1 = '[1, 2]';
    SET @json2 = '[3, 4]';
    
    SELECT JSON_MERGE(@json1,@json2);
    +---------------------------+
    | JSON_MERGE(@json1,@json2) |
    +---------------------------+
    | [1, 2, 3, 4]              |
    +---------------------------+
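JSON_MERGE also accepts objects; values of duplicate keys are preserved by combining them into an array. A sketch with assumed values:

```sql
SET @obj1 = '{"a": 1, "b": 2}';
SET @obj2 = '{"a": 3}';

-- The duplicate key "a" keeps both values: {"a": [1, 3], "b": 2}
SELECT JSON_MERGE(@obj1, @obj2);
```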

    NTH_VALUE

    Return the value from the N-th row. This function retrieves the value of an expression from the N-th row within the window frame.

    Syntax

    NTH_VALUE (expr[, num_row]) OVER ( 
      [ PARTITION BY partition_expression ] 
      [ ORDER BY order_list ]
    )

    Description

    The NTH_VALUE function returns the value evaluated at row number num_row of the window frame, starting from 1, or NULL if the row does not exist.
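A minimal sketch of NTH_VALUE in use, assuming a table t1 with a single integer column i holding 1 to 10 (as created in the FIRST_VALUE/LAST_VALUE example earlier):

```sql
-- With ORDER BY, the default frame ends at the current row,
-- so the first two rows see no third row yet and return NULL;
-- from the third row onward, NTH_VALUE(i, 3) returns 3.
SELECT i,
       NTH_VALUE(i, 3) OVER (ORDER BY i) AS third_i
FROM t1;
```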

    This page is licensed: CC BY-SA / Gnu FDL

    Window Frames

    Define subsets of rows for calculation. Window frames specify which rows relative to the current row are included in the window function's calculation.

    Syntax

    Description

    A basic overview of window functions is given in the Window Functions Overview. Window frames expand this functionality by allowing the function to include a specified number of rows around the current row.

    These include:

    • All rows before the current row (UNBOUNDED PRECEDING), for example RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW.

    • All rows after the current row (UNBOUNDED FOLLOWING), for example RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING.

    • A set number of rows before the current row (expr PRECEDING), for example RANGE BETWEEN 6 PRECEDING AND CURRENT ROW.

    • A set number of rows after the current row (expr FOLLOWING), for example RANGE BETWEEN CURRENT ROW AND 2 FOLLOWING.

    • A specified number of rows both before and after the current row (expr PRECEDING AND expr FOLLOWING), for example RANGE BETWEEN 6 PRECEDING AND 3 FOLLOWING.

    The following functions operate on window frames:

    • AVG

    • BIT_AND

    • BIT_OR

    • BIT_XOR

    • COUNT

    • LEAD

    • MAX

    • MIN

    • NTILE

    • STD

    • STDDEV

    • STDDEV_POP

    • STDDEV_SAMP

    • SUM

    • VAR_POP

    • VAR_SAMP

    • VARIANCE

    Window frames are determined by the frame_clause in the window function's OVER clause.

    Take the following example:

    With an empty OVER clause, the SUM function runs over the entire dataset. However, if we specify an ORDER BY condition based on score (and order the entire result in the same way for clarity), the following result is returned:

    The total_score column represents a running total of the current row and all previous rows. The window frame in this example expands as the function proceeds.

    The above query makes use of the default frame definition, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, to define the window frame. It could be written explicitly as follows:

    Let's look at some alternatives:

    Firstly, applying the window function to the current row and all following rows can be done with the use of UNBOUNDED FOLLOWING:

    It's possible to specify a number of rows, rather than the entire unbounded following or preceding set. The following example takes the current row, as well as the previous row:

    The current row and the following row:

    This page is licensed: CC BY-SA / Gnu FDL


    JSON_TABLE

    Convert JSON data to a relational table. This table function extracts data from a JSON document and returns it as a relational table with columns.

    JSON_TABLE is available from MariaDB 10.6.

    JSON_TABLE is a table function that converts JSON data into a relational form.

    frame_clause:
      {ROWS | RANGE} {frame_border | BETWEEN frame_border AND frame_border}
    
    frame_border:
      | UNBOUNDED PRECEDING
      | UNBOUNDED FOLLOWING
      | CURRENT ROW
      | expr PRECEDING
      | expr FOLLOWING
    CREATE TABLE `student_test` (
      name char(10),
      test char(10),
      score tinyint(4)
    );
    
    INSERT INTO student_test VALUES 
        ('Chun', 'SQL', 75), ('Chun', 'Tuning', 73), 
        ('Esben', 'SQL', 43), ('Esben', 'Tuning', 31), 
        ('Kaolin', 'SQL', 56), ('Kaolin', 'Tuning', 88), 
        ('Tatiana', 'SQL', 87);
    
    SELECT name, test, score, SUM(score) 
      OVER () AS total_score 
      FROM student_test;
    +---------+--------+-------+-------------+
    | name    | test   | score | total_score |
    +---------+--------+-------+-------------+
    | Chun    | SQL    |    75 |         453 |
    | Chun    | Tuning |    73 |         453 |
    | Esben   | SQL    |    43 |         453 |
    | Esben   | Tuning |    31 |         453 |
    | Kaolin  | SQL    |    56 |         453 |
    | Kaolin  | Tuning |    88 |         453 |
    | Tatiana | SQL    |    87 |         453 |
    +---------+--------+-------+-------------+
    SELECT name, test, score, SUM(score) 
      OVER (ORDER BY score) AS total_score 
      FROM student_test ORDER BY score;
    +---------+--------+-------+-------------+
    | name    | test   | score | total_score |
    +---------+--------+-------+-------------+
    | Esben   | Tuning |    31 |          31 |
    | Esben   | SQL    |    43 |          74 |
    | Kaolin  | SQL    |    56 |         130 |
    | Chun    | Tuning |    73 |         203 |
    | Chun    | SQL    |    75 |         278 |
    | Tatiana | SQL    |    87 |         365 |
    | Kaolin  | Tuning |    88 |         453 |
    +---------+--------+-------+-------------+
    SELECT name, test, score, SUM(score) 
      OVER (ORDER BY score RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS total_score 
      FROM student_test ORDER BY score;
    +---------+--------+-------+-------------+
    | name    | test   | score | total_score |
    +---------+--------+-------+-------------+
    | Esben   | Tuning |    31 |          31 |
    | Esben   | SQL    |    43 |          74 |
    | Kaolin  | SQL    |    56 |         130 |
    | Chun    | Tuning |    73 |         203 |
    | Chun    | SQL    |    75 |         278 |
    | Tatiana | SQL    |    87 |         365 |
    | Kaolin  | Tuning |    88 |         453 |
    +---------+--------+-------+-------------+
    SELECT name, test, score, SUM(score) 
      OVER (ORDER BY score RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS total_score 
      FROM student_test ORDER BY score;
    +---------+--------+-------+-------------+
    | name    | test   | score | total_score |
    +---------+--------+-------+-------------+
    | Esben   | Tuning |    31 |         453 |
    | Esben   | SQL    |    43 |         422 |
    | Kaolin  | SQL    |    56 |         379 |
    | Chun    | Tuning |    73 |         323 |
    | Chun    | SQL    |    75 |         250 |
    | Tatiana | SQL    |    87 |         175 |
    | Kaolin  | Tuning |    88 |          88 |
    +---------+--------+-------+-------------+
    SELECT name, test, score, SUM(score) 
      OVER (ORDER BY score ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) AS total_score 
      FROM student_test ORDER BY score;
    +---------+--------+-------+-------------+
    | name    | test   | score | total_score |
    +---------+--------+-------+-------------+
    | Esben   | Tuning |    31 |          31 |
    | Esben   | SQL    |    43 |          74 |
    | Kaolin  | SQL    |    56 |          99 |
    | Chun    | Tuning |    73 |         129 |
    | Chun    | SQL    |    75 |         148 |
    | Tatiana | SQL    |    87 |         162 |
    | Kaolin  | Tuning |    88 |         175 |
    +---------+--------+-------+-------------+
    SELECT name, test, score, SUM(score) 
      OVER (ORDER BY score ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS total_score 
      FROM student_test ORDER BY score;
    +---------+--------+-------+-------------+
    | name    | test   | score | total_score |
    +---------+--------+-------+-------------+
    | Esben   | Tuning |    31 |          74 |
    | Esben   | SQL    |    43 |         130 |
    | Kaolin  | SQL    |    56 |         172 |
    | Chun    | Tuning |    73 |         204 |
    | Chun    | SQL    |    75 |         235 |
    | Tatiana | SQL    |    87 |         250 |
    | Kaolin  | Tuning |    88 |         175 |
    +---------+--------+-------+-------------+
    Syntax

    Description

    JSON_TABLE can be used in contexts where a table reference can be used, such as in the FROM clause of a SELECT statement, and in multi-table UPDATE/DELETE statements.

    json_doc is the JSON document to extract data from. In the simplest case, it is a string literal containing JSON. In more complex cases it can be an arbitrary expression returning JSON. The expression may have references to columns of other tables. However, one can only refer to tables that precede this JSON_TABLE invocation. For RIGHT JOIN, it is assumed that its outer side precedes the inner. All tables in outer selects are also considered preceding.

    context_path is a JSON Path expression pointing to a collection of nodes in json_doc that will be used as the source of rows.

    The COLUMNS clause declares the names and types of the columns that JSON_TABLE returns, as well as how the values of the columns are produced.

    Column Definitions

    The following types of columns are supported:

    Path Columns

    Locates the JSON node pointed to by path_str and returns its value. The path_str is evaluated using the current row source node as the context node.

    The on_empty and on_error clauses specify the actions to be performed when the value was not found or there was an error condition. See the ON EMPTY and ON ERROR clauses section for details.

    ORDINALITY Columns

    Counts the rows, starting from 1.

    Example:

    EXISTS PATH Columns

    Checks whether the node pointed to by value_path exists. The value_path is evaluated using the current row source node as the context node.

    NESTED PATHs

    NESTED PATH converts nested JSON structures into multiple rows.

    It finds the sequence of JSON nodes pointed to by path and uses it to produce rows. For each found node, a row is generated with column values as specified by the NESTED PATH's COLUMNS clause. If path finds no nodes, only one row is generated with all columns having NULL values.

    For example, consider a JSON document that contains an array of items, and each item, in turn, is expected to have an array of its available sizes:

    NESTED PATH allows one to produce a separate row for each size each item has:

    NESTED PATH clauses can be nested within one another. They can also be located next to each other. In that case, the nested path clauses will produce records one at a time. The ones that are not producing records will have all columns set to NULL.

    Example:

    ON EMPTY and ON ERROR Clauses

    The ON EMPTY clause specifies what will be done when the element specified by the search path is missing in the JSON document.

    When the ON EMPTY clause is not present, NULL ON EMPTY is implied.

    The ON ERROR clause specifies what should be done if a JSON structure error occurs when trying to extract the value pointed to by the path expression. A JSON structure error here occurs only when one attempts to convert a JSON non-scalar (array or object) into a scalar value. When the ON ERROR clause is not present, NULL ON ERROR is implied.

    Note: A datatype conversion error (e.g. attempt to store a non-integer value into an integer field, or a varchar column being truncated) is not considered a JSON error and so will not trigger the ON ERROR behavior. It will produce warnings, in the same way as CAST(value AS datatype) would.
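As a sketch of both clauses with hypothetical data: the missing price on the second item triggers ON EMPTY, while the array-valued price on the third item is a non-scalar and triggers ON ERROR:

```sql
SET @json='
[
  {"name":"Laptop", "price":1000},
  {"name":"Jeans"},
  {"name":"Bundle", "price":[10, 20]}
]';

SELECT * FROM JSON_TABLE(@json, '$[*]'
  COLUMNS(
    name  VARCHAR(10)  PATH '$.name',
    price DECIMAL(8,2) PATH '$.price'
          DEFAULT '0'  ON EMPTY   -- row 2: price is missing
          DEFAULT '-1' ON ERROR   -- row 3: price is an array, not a scalar
  )
) AS jt;
```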

    Replication

    In the current code, evaluation of JSON_TABLE is deterministic: for a given input string, JSON_TABLE will always produce the same set of rows in the same order. However, JSON documents that one might consider identical can produce different output. In order to be future-proof and withstand changes like:

    • sorting JSON object members by name (as MySQL does);

    • changing the way duplicate object members are handled;

    the function is marked as unsafe for statement-based replication.

    Extracting a Subdocument into a Column

    JSON_TABLE does not allow one to extract a JSON "subdocument" into a JSON column.

    See Also

    • JSON Support (video)

    This page is licensed: CC BY-SA / Gnu FDL

    SELECT * FROM JSON_TABLE('{"foo": [1,2,3,4]}','$' columns( jscol json path '$.foo') ) AS T;
    +-----------+
    | jscol     |
    +-----------+
    | [1,2,3,4] |
    +-----------+
    SELECT * FROM JSON_TABLE('{"foo": [1,2,3,4]}','$' columns( jscol json path '$.foo') ) AS T;
    +-------+
    | jscol |
    +-------+
    | NULL  |
    +-------+
    JSON_TABLE(json_doc, 
              context_path COLUMNS (column_list)
    ) [AS] alias
    column_list:
        column[, column][, ...]
    column:
        name FOR ORDINALITY
        |  name type PATH path_str [on_empty] [on_error]
        |  name type EXISTS PATH path_str
        |  NESTED PATH path_str COLUMNS (column_list)
    on_empty:
        {NULL | DEFAULT string | ERROR} ON EMPTY
    on_error:
        {NULL | DEFAULT string | ERROR} ON ERROR
    name type PATH path_str [on_empty] [on_error]
    SET @json='
    [
      {"name":"Laptop", "color":"black", "price":"1000"},
      {"name":"Jeans",  "color":"blue"}
    ]';
    
    SELECT * FROM json_table(@json, '$[*]' 
      COLUMNS(
       name  VARCHAR(10) path '$.name', 
       color VARCHAR(10) path '$.color',
       price DECIMAL(8,2) path '$.price' ) 
    ) AS jt;
    +--------+-------+---------+
    | name   | color | price   |
    +--------+-------+---------+
    | Laptop | black | 1000.00 |
    | Jeans  | blue  |    NULL |
    +--------+-------+---------+
    name FOR ORDINALITY
    set @json='
    [
      {"name":"Laptop", "color":"black"},
      {"name":"Jeans",  "color":"blue"}
    ]';
    
    select * from json_table(@json, '$[*]' 
      columns(
       id for ordinality, 
       name  varchar(10) path '$.name')
    ) as jt;
    +------+--------+
    | id   | name   |
    +------+--------+
    |    1 | Laptop |
    |    2 | Jeans  |
    +------+--------+
    name type EXISTS PATH path_str
    set @json='
    [
      {"name":"Laptop", "color":"black", "price":1000},
      {"name":"Jeans",  "color":"blue"}
    ]';
    
    select * from json_table(@json, '$[*]' 
      columns(
       name  varchar(10) path '$.name',
       has_price integer exists path '$.price')
    ) as jt;
    +--------+-----------+
    | name   | has_price |
    +--------+-----------+
    | Laptop |         1 |
    | Jeans  |         0 |
    +--------+-----------+
    NESTED PATH path COLUMNS (column_list)
    SET @json='
    [
      {"name":"Jeans",  "sizes": [32, 34, 36]},
      {"name":"T-Shirt", "sizes":["Medium", "Large"]},
      {"name":"Cellphone"}
    ]';
    select * from json_table(@json, '$[*]' 
      columns(
        name  varchar(10) path '$.name', 
        nested path '$.sizes[*]' columns (
          size varchar(32) path '$'
        )
      )
    ) as jt;
    +-----------+--------+
    | name      | size   |
    +-----------+--------+
    | Jeans     | 32     |
    | Jeans     | 34     |
    | Jeans     | 36     |
    | T-Shirt   | Medium |
    | T-Shirt   | Large  |
    | Cellphone | NULL   |
    +-----------+--------+
    set @json='
    [
      {"name":"Jeans",  "sizes": [32, 34, 36], "colors":["black", "blue"]}
    ]';
    
    select * from json_table(@json, '$[*]' 
      columns(
        name  varchar(10) path '$.name', 
        nested path '$.sizes[*]' columns (
          size varchar(32) path '$'
        ),
        nested path '$.colors[*]' columns (
          color varchar(32) path '$'
        )
      )
    ) as jt;
    
    +-------+------+-------+
    | name  | size | color |
    +-------+------+-------+
    | Jeans | 32   | NULL  |
    | Jeans | 34   | NULL  |
    | Jeans | 36   | NULL  |
    | Jeans | NULL | black |
    | Jeans | NULL | blue  |
    +-------+------+-------+
    on_empty:
        {NULL | DEFAULT string | ERROR} ON EMPTY
    on_error:
        {NULL | DEFAULT string | ERROR} ON ERROR

    Window Functions Overview

    Understand window function concepts. These functions operate on a set of rows (window) defined by an OVER clause, returning a value for each row without collapsing results.

    Introduction

    Window functions allow calculations to be performed across a set of rows related to the current row.

    Syntax

    Description

    Window functions perform calculations across a set of rows (in a defined window).

    Dedicated window functions, and the aggregate functions that can also be used as window functions, are listed further below.

    Window function queries are characterised by the OVER keyword, following which the set of rows used for the calculation is specified. By default, the set of rows used for the calculation (the "window") is the entire dataset, which can be ordered with the ORDER BY clause. The PARTITION BY clause is used to reduce the window to a particular group within the dataset.

    Consider the following data:

    The following two queries return the average partitioned by test and by name respectively:

    It is also possible to specify which rows to include for the window function (for example, the current row and all preceding rows). See Window Frames for more details.

    Scope

    Window functions were introduced in SQL:2003, and their definition was expanded in subsequent versions of the standard, most recently in SQL:2011.

    Most database products support a subset of the standard: they implement some functions defined as late as SQL:2011, while leaving some parts of SQL:2008 unimplemented.

    MariaDB:

    • Supports ROWS and RANGE-type frames

      • All kinds of frame bounds are supported, including RANGE n PRECEDING|FOLLOWING frame bounds (unlike PostgreSQL or MS SQL Server)

      • Does not yet support DATE[TIME] datatype and arithmetic for RANGE-type frames (MDEV-9727)

    Links

    • MDEV-6115 is the main jira task for window functions development. Other tasks are attached as sub-tasks.

    • bb-10.2-mdev9543 is the feature tree for window functions. Development is ongoing, and this tree has the newest changes.

    • Test cases are in mysql-test/t/win*.test .

    Examples

    Given the following sample data:

    First, let's order the records by email alphabetically, giving each an ascending rnum value starting with 1. This will make use of the ROW_NUMBER window function:

    We can generate separate sequences based on account type, using the PARTITION BY clause:

    Given the following structure and data, we want to find the top 5 salaries from each department.

    We could do this without using window functions, as follows:

    This has a number of disadvantages:

    • If there is no index, the query could take a long time if the employee_salary_table is large.

    • Adding and maintaining indexes adds overhead, and even with indexes on dept and salary, each subquery execution adds overhead by performing a lookup through the index.

    Let's try to achieve the same with window functions. First, generate a rank for all employees, using the RANK function.

    Each department has a separate sequence of ranks due to the PARTITION BY clause. This particular sequence of values for rank() is given by the ORDER BY clause inside the window function’s OVER clause. Finally, to get our results in a readable format we order the data by dept and the newly generated ranking column.

    Now, we need to reduce the results to find only the top 5 per department. Here is a common mistake:

    Trying to filter only the first 5 values per department by putting a where clause in the statement does not work, due to the way window functions are computed. The computation of window functions happens after all WHERE, GROUP BY and HAVING clauses have been completed, right before ORDER BY, so the WHERE clause has no idea that the ranking column exists. It is only present after we have filtered and grouped all the rows.

    To counteract this problem, we need to wrap our query into a derived table. We can then attach a where clause to it:
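The wrapped query might look like the following sketch (employee_salary_table and its dept, name and salary columns are the assumed schema from the discussion above):

```sql
-- The derived table computes the ranking first,
-- so the outer WHERE clause can refer to it.
SELECT dept, name, salary, ranking
FROM (
  SELECT dept, name, salary,
         RANK() OVER (PARTITION BY dept ORDER BY salary DESC) AS ranking
  FROM employee_salary_table
) AS ranked
WHERE ranking <= 5
ORDER BY dept, ranking;
```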

    See Also

    • Introduction to Window Functions in MariaDB Server 10.2

    This page is licensed: CC BY-SA / Gnu FDL

    function (expression) OVER (
      [ PARTITION BY expression_list ]
      [ ORDER BY order_list [ frame_clause ] ] ) 
    
    function:
      A valid window function
    
    expression_list:
      expression | column_name [, expr_list ]
    
    order_list:
      expression | column_name [ ASC | DESC ] 
      [, ... ]
    
    frame_clause:
      {ROWS | RANGE} {frame_border | BETWEEN frame_border AND frame_border}
    
    frame_border:
      | UNBOUNDED PRECEDING
      | UNBOUNDED FOLLOWING
      | CURRENT ROW
      | expr PRECEDING
      | expr FOLLOWING
  Dedicated window functions:

  • CUME_DIST
  • DENSE_RANK
  • FIRST_VALUE
  • LAG
  • LAST_VALUE
  • LEAD
  • MEDIAN
  • NTH_VALUE
  • NTILE
  • PERCENT_RANK
  • PERCENTILE_CONT
  • PERCENTILE_DISC
  • RANK
  • ROW_NUMBER

  Aggregate functions that can also be used as window functions:

  • AVG
  • BIT_AND
  • BIT_OR
  • BIT_XOR
  • COUNT
  • MAX
  • MIN
  • STD
  • STDDEV
  • STDDEV_POP
  • STDDEV_SAMP
  • SUM
  • VAR_POP
  • VAR_SAMP
  • VARIANCE

  • Does not support GROUPS-type frames (it seems that no popular database supports it, either)

  • Does not support frame exclusion (no other database seems to support it, either) (MDEV-9724)

  • Does not support explicit NULLS FIRST or NULLS LAST.

  • Does not support nested navigation in window functions (this is the VALUE_OF(expr AT row_marker [, default_value]) syntax)

  • The following window functions are supported:

    • "Streamable" window functions: ROW_NUMBER, RANK, DENSE_RANK

    • Window functions that can be streamed once the number of rows in the partition is known: PERCENT_RANK, CUME_DIST, NTILE

  • Aggregate functions that are currently supported as window functions are: COUNT, SUM, AVG, BIT_OR, BIT_AND, BIT_XOR.

  • Aggregate functions with the DISTINCT specifier (e.g. COUNT(DISTINCT x)) are not supported as window functions.
    CREATE TABLE student (name CHAR(10), test CHAR(10), score TINYINT); 
    
    INSERT INTO student VALUES 
      ('Chun', 'SQL', 75), ('Chun', 'Tuning', 73), 
      ('Esben', 'SQL', 43), ('Esben', 'Tuning', 31), 
      ('Kaolin', 'SQL', 56), ('Kaolin', 'Tuning', 88), 
      ('Tatiana', 'SQL', 87), ('Tatiana', 'Tuning', 83);
    SELECT name, test, score, AVG(score) OVER (PARTITION BY test) 
      AS average_by_test FROM student;
    +---------+--------+-------+-----------------+
    | name    | test   | score | average_by_test |
    +---------+--------+-------+-----------------+
    | Chun    | SQL    |    75 |         65.2500 |
    | Chun    | Tuning |    73 |         68.7500 |
    | Esben   | SQL    |    43 |         65.2500 |
    | Esben   | Tuning |    31 |         68.7500 |
    | Kaolin  | SQL    |    56 |         65.2500 |
    | Kaolin  | Tuning |    88 |         68.7500 |
    | Tatiana | SQL    |    87 |         65.2500 |
    | Tatiana | Tuning |    83 |         68.7500 |
    +---------+--------+-------+-----------------+
    
    SELECT name, test, score, AVG(score) OVER (PARTITION BY name) 
      AS average_by_name FROM student;
    +---------+--------+-------+-----------------+
    | name    | test   | score | average_by_name |
    +---------+--------+-------+-----------------+
    | Chun    | SQL    |    75 |         74.0000 |
    | Chun    | Tuning |    73 |         74.0000 |
    | Esben   | SQL    |    43 |         37.0000 |
    | Esben   | Tuning |    31 |         37.0000 |
    | Kaolin  | SQL    |    56 |         72.0000 |
    | Kaolin  | Tuning |    88 |         72.0000 |
    | Tatiana | SQL    |    87 |         85.0000 |
    | Tatiana | Tuning |    83 |         85.0000 |
    +---------+--------+-------+-----------------+
    CREATE TABLE users (
      email VARCHAR(30), 
      first_name VARCHAR(30), 
      last_name VARCHAR(30), 
      account_type VARCHAR(30)
    );
    
    INSERT INTO users VALUES 
      ('admin@boss.org', 'Admin', 'Boss', 'admin'), 
      ('bob.carlsen@foo.bar', 'Bob', 'Carlsen', 'regular'),
      ('eddie.stevens@data.org', 'Eddie', 'Stevens', 'regular'),
      ('john.smith@xyz.org', 'John', 'Smith', 'regular'), 
      ('root@boss.org', 'Root', 'Chief', 'admin');
    SELECT row_number() OVER (ORDER BY email) AS rnum,
        email, first_name, last_name, account_type
    FROM users ORDER BY email;
    +------+------------------------+------------+-----------+--------------+
    | rnum | email                  | first_name | last_name | account_type |
    +------+------------------------+------------+-----------+--------------+
    |    1 | admin@boss.org         | Admin      | Boss      | admin        |
    |    2 | bob.carlsen@foo.bar    | Bob        | Carlsen   | regular      |
    |    3 | eddie.stevens@data.org | Eddie      | Stevens   | regular      |
    |    4 | john.smith@xyz.org     | John       | Smith     | regular      |
    |    5 | root@boss.org          | Root       | Chief     | admin        |
    +------+------------------------+------------+-----------+--------------+
    SELECT row_number() OVER (PARTITION BY account_type ORDER BY email) AS rnum, 
      email, first_name, last_name, account_type 
    FROM users ORDER BY account_type,email;
    +------+------------------------+------------+-----------+--------------+
    | rnum | email                  | first_name | last_name | account_type |
    +------+------------------------+------------+-----------+--------------+
    |    1 | admin@boss.org         | Admin      | Boss      | admin        |
    |    2 | root@boss.org          | Root       | Chief     | admin        |
    |    1 | bob.carlsen@foo.bar    | Bob        | Carlsen   | regular      |
    |    2 | eddie.stevens@data.org | Eddie      | Stevens   | regular      |
    |    3 | john.smith@xyz.org     | John       | Smith     | regular      |
    +------+------------------------+------------+-----------+--------------+
    CREATE TABLE employee_salaries (dept VARCHAR(20), name VARCHAR(20), salary INT(11));
    
    INSERT INTO employee_salaries VALUES
    ('Engineering', 'Dharma', 3500),
    ('Engineering', 'Binh', 3000),
    ('Engineering', 'Adalynn', 2800),
    ('Engineering', 'Samuel', 2500),
    ('Engineering', 'Cveta', 2200),
    ('Engineering', 'Ebele', 1800),
    ('Sales', 'Carbry', 500),
    ('Sales', 'Clytemnestra', 400),
    ('Sales', 'Juraj', 300),
    ('Sales', 'Kalpana', 300),
    ('Sales', 'Svantepolk', 250),
    ('Sales', 'Angelo', 200);
    select dept, name, salary
    from employee_salaries as t1
    where (select count(t2.salary)
           from employee_salaries as t2
           where t1.name != t2.name and
                 t1.dept = t2.dept and
                 t2.salary > t1.salary) < 5
    order by dept, salary desc;
    
    +-------------+--------------+--------+
    | dept        | name         | salary |
    +-------------+--------------+--------+
    | Engineering | Dharma       |   3500 |
    | Engineering | Binh         |   3000 |
    | Engineering | Adalynn      |   2800 |
    | Engineering | Samuel       |   2500 |
    | Engineering | Cveta        |   2200 |
    | Sales       | Carbry       |    500 |
    | Sales       | Clytemnestra |    400 |
    | Sales       | Juraj        |    300 |
    | Sales       | Kalpana      |    300 |
    | Sales       | Svantepolk   |    250 |
    +-------------+--------------+--------+
    SELECT rank() OVER (PARTITION BY dept ORDER BY salary DESC) AS ranking,
        dept, name, salary
        FROM employee_salaries
        ORDER BY dept, ranking;
    +---------+-------------+--------------+--------+
    | ranking | dept        | name         | salary |
    +---------+-------------+--------------+--------+
    |       1 | Engineering | Dharma       |   3500 |
    |       2 | Engineering | Binh         |   3000 |
    |       3 | Engineering | Adalynn      |   2800 |
    |       4 | Engineering | Samuel       |   2500 |
    |       5 | Engineering | Cveta        |   2200 |
    |       6 | Engineering | Ebele        |   1800 |
    |       1 | Sales       | Carbry       |    500 |
    |       2 | Sales       | Clytemnestra |    400 |
    |       3 | Sales       | Juraj        |    300 |
    |       3 | Sales       | Kalpana      |    300 |
    |       5 | Sales       | Svantepolk   |    250 |
    |       6 | Sales       | Angelo       |    200 |
    +---------+-------------+--------------+--------+
    select
    rank() over (partition by dept order by salary desc) as ranking,
    dept, name, salary
    from employee_salaries
    where ranking <= 5
    order by dept, ranking;
    
    ERROR 1054 (42S22): Unknown column 'ranking' in 'where clause'
    select * from (select rank() over (partition by dept order by salary desc) as ranking,
      dept, name, salary
    from employee_salaries) as salary_ranks
    where (salary_ranks.ranking <= 5)
      order by dept, ranking;
    +---------+-------------+--------------+--------+
    | ranking | dept        | name         | salary |
    +---------+-------------+--------------+--------+
    |       1 | Engineering | Dharma       |   3500 |
    |       2 | Engineering | Binh         |   3000 |
    |       3 | Engineering | Adalynn      |   2800 |
    |       4 | Engineering | Samuel       |   2500 |
    |       5 | Engineering | Cveta        |   2200 |
    |       1 | Sales       | Carbry       |    500 |
    |       2 | Sales       | Clytemnestra |    400 |
    |       3 | Sales       | Juraj        |    300 |
    |       3 | Sales       | Kalpana      |    300 |
    |       5 | Sales       | Svantepolk   |    250 |
    +---------+-------------+--------------+--------+
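The derived-table workaround is not MariaDB-specific; any engine with standard window functions behaves the same way. As a self-contained sketch, here is the same query run from Python against an in-memory SQLite database (same table and data as above, SQLite chosen only for portability of the demo):

```python
import sqlite3

# Same table and data as the employee_salaries example above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee_salaries (dept TEXT, name TEXT, salary INT);
INSERT INTO employee_salaries VALUES
  ('Engineering', 'Dharma', 3500), ('Engineering', 'Binh', 3000),
  ('Engineering', 'Adalynn', 2800), ('Engineering', 'Samuel', 2500),
  ('Engineering', 'Cveta', 2200), ('Engineering', 'Ebele', 1800),
  ('Sales', 'Carbry', 500), ('Sales', 'Clytemnestra', 400),
  ('Sales', 'Juraj', 300), ('Sales', 'Kalpana', 300),
  ('Sales', 'Svantepolk', 250), ('Sales', 'Angelo', 200);
""")

# The window function is evaluated inside the derived table, so the outer
# WHERE clause can legally filter on its result.
rows = conn.execute("""
    SELECT ranking, dept, name, salary
    FROM (SELECT rank() OVER (PARTITION BY dept ORDER BY salary DESC) AS ranking,
                 dept, name, salary
          FROM employee_salaries) AS salary_ranks
    WHERE ranking <= 5
    ORDER BY dept, ranking
""").fetchall()

for row in rows:
    print(row)  # 10 rows; the rank-6 rows (Ebele, Angelo) are filtered out
```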

    ColumnStore Window Functions

    Review window function support in ColumnStore. This page details the specific window functions and limitations applicable when using the ColumnStore storage engine.

    Introduction

    MariaDB ColumnStore provides support for window functions broadly following the SQL 2003 specification. A window function allows for calculations relating to a window of data surrounding the current row in a result set. This capability provides for simplified queries in support of common business questions such as cumulative totals, rolling averages, and top 10 lists.

    Aggregate functions can be used as window functions, but they differ in behavior from a GROUP BY query because the rows remain ungrouped. This provides support for cumulative sums and rolling averages, for example.

    Two key concepts for window functions are Partition and Frame:

  • A Partition is a group of rows, or window, that have the same value for a specific column; for example, a Partition can be created over a time period such as a quarter, or over lookup values.

    • The Frame for each row is a subset of the row's Partition. The frame typically is dynamic allowing for a sliding frame of rows within the Partition. The Frame determines the range of rows for the windowing function. A Frame could be defined as the last X rows and next Y rows all the way up to the entire Partition.

    Window functions are applied after joins, group by, and having clauses are calculated.
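The Partition/Frame idea can be sketched concretely. The snippet below uses SQLite from Python purely because it makes the example self-contained (the `sales` table is invented): a sliding three-row Frame, over the single implicit Partition, computes a centred rolling average.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (day INT, amt REAL);
INSERT INTO sales VALUES (1, 10), (2, 20), (3, 30), (4, 40);
""")

# Frame = previous row, current row, next row (shrinks at the edges of
# the partition), i.e. a centred 3-row rolling average.
rows = conn.execute("""
    SELECT day, amt,
           AVG(amt) OVER (ORDER BY day
                          ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS rolling
    FROM sales ORDER BY day
""").fetchall()
print(rows)  # [(1, 10.0, 15.0), (2, 20.0, 20.0), (3, 30.0, 30.0), (4, 40.0, 35.0)]
```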

    Syntax

    A window function is applied in the select clause using the following syntax:

    where window_definition is defined as:

    PARTITION BY:

    • Divides the window result set into groups based on one or more expressions.

    • An expression may be a constant, column, and non window function expressions.

    • A query is not limited to a single partition by clause. Different partition clauses can be used across different window function applications.

    • The partition by columns do not need to be in the select list but do need to be available from the query result set.

    ORDER BY:

    • Defines the ordering of values within the partition.

    • Can be ordered by multiple keys which may be a constant, column or non window function expression.

    • The order by columns do not need to be in the select list but need to be available from the query result set.

    • Use of a select column alias from the query is not supported.

    and the optional frame_clause is defined as:

    and the optional frame_start and frame_end are defined as (value being a numeric expression):

    RANGE/ROWS:

  • Defines the windowing clause, i.e. the set of rows used to calculate a given row's window function result.

  • Requires an ORDER BY clause to define the row order for the window.

  • ROWS specifies the window in physical units, i.e. result-set rows, and must be a constant or expression evaluating to a positive numeric value.

    Supported Functions

    Function
    Description

    Examples

    Example Schema

    The examples are all based on the following simplified sales opportunity table:

    Some example values are (thanks to www.mockaroo.com for sample data generation):

    +----+-------------+-----------------------------------------------------+---------+-----------+------------+-------------+
    | id | accountName | name                                                | owner   | amount    | closeDate  | stageName   |
    +----+-------------+-----------------------------------------------------+---------+-----------+------------+-------------+
    |  1 | Browseblab  | Multi-lateral executive function                    | Bob     |  26444.86 | 2016-10-20 | Negotiating |
    |  2 |             | Organic demand-driven benchmark                     | Maria   | 477878.41 | 2016-11-28 | ClosedWon   |
    |  3 | Miboo       | De-engineered hybrid groupware                      | Olivier |  80181.78 | 2017-01-05 | ClosedWon   |
    |  4 | Youbridge   | Enterprise-wide bottom-line Graphic Interface       | Chris   | 946245.29 | 2016-07-02 | ClosedWon   |
    |  5 | Skyba       | Reverse-engineered fresh-thinking standardization   | Maria   | 696241.82 | 2017-02-17 | Negotiating |
    |  6 | Eayo        | Fundamental well-modulated artificial intelligence  | Bob     | 765605.52 | 2016-08-27 | Prospecting |
    |  7 | Yotz        | Extended secondary infrastructure                   | Chris   | 319624.20 | 2017-01-06 | ClosedLost  |
    |  8 | Oloo        | Configurable web-enabled data-warehouse             | Chris   | 321016.26 | 2017-03-08 | ClosedLost  |
    |  9 | Kaymbo      | Multi-lateral web-enabled definition                | Bob     | 690881.01 | 2017-01-02 | Developing  |
    | 10 | Rhyloo      | Public-key coherent infrastructure                  | Chris   | 965477.74 | 2016-11-07 | Prospecting |
    +----+-------------+-----------------------------------------------------+---------+-----------+------------+-------------+

    The schema, sample data, and queries are available as an attachment to this article.

    Cumulative Sum and Running Max Example

    Window functions can be used to achieve cumulative / running calculations on a detail report. In this case a won opportunity report for a 7 day period adds columns to show the accumulated won amount as well as the current highest opportunity amount in preceding rows.

    with example results:

    +---------+--------------+------------+-----------+------------+------------+
    | owner   | accountName  | CloseDate  | amount    | cumeWon    | runningMax |
    +---------+--------------+------------+-----------+------------+------------+
    | Bill    | Babbleopia   | 2016-10-02 | 437636.47 |  437636.47 |  437636.47 |
    | Bill    | Thoughtworks | 2016-10-04 | 146086.51 |  583722.98 |  437636.47 |
    | Olivier | Devpulse     | 2016-10-05 | 834235.93 | 1417958.91 |  834235.93 |
    | Chris   | Linkbridge   | 2016-10-07 | 539977.45 | 2458738.65 |  834235.93 |
    | Olivier | Trupe        | 2016-10-07 | 500802.29 | 1918761.20 |  834235.93 |
    | Bill    | Latz         | 2016-10-08 | 857254.87 | 3315993.52 |  857254.87 |
    | Chris   | Avamm        | 2016-10-09 | 699566.86 | 4015560.38 |  857254.87 |
    +---------+--------------+------------+-----------+------------+------------+

    Partitioned Cumulative Sum and Running Max Example

    The above example can be partitioned, so that the window functions are over a particular field grouping such as owner and accumulate within that grouping. This is achieved by adding the syntax "partition by" in the window function clause.

    With example results:

    +---------+--------------+------------+-----------+------------+------------+
    | owner   | accountName  | CloseDate  | amount    | cumeWon    | runningMax |
    +---------+--------------+------------+-----------+------------+------------+
    | Bill    | Babbleopia   | 2016-10-02 | 437636.47 |  437636.47 |  437636.47 |
    | Bill    | Thoughtworks | 2016-10-04 | 146086.51 |  583722.98 |  437636.47 |
    | Bill    | Latz         | 2016-10-08 | 857254.87 | 1440977.85 |  857254.87 |
    | Chris   | Linkbridge   | 2016-10-07 | 539977.45 |  539977.45 |  539977.45 |
    | Chris   | Avamm        | 2016-10-09 | 699566.86 | 1239544.31 |  699566.86 |
    | Olivier | Devpulse     | 2016-10-05 | 834235.93 |  834235.93 |  834235.93 |
    | Olivier | Trupe        | 2016-10-07 | 500802.29 | 1335038.22 |  834235.93 |
    +---------+--------------+------------+-----------+------------+------------+

    Ranking / Top Results

    The rank window function allows for ranking, or assigning a numeric order value, based on the window function definition. With the Rank() function, ties (equal values) receive the same rank and the next rank value is skipped. The Dense_Rank() function behaves similarly, except that the next consecutive number is used after a tie rather than skipped. The Row_Number() function provides a unique ordering value. The example query shows the Rank() function being applied to rank sales reps by the number of opportunities for Q4 2016.

    with example results (note the query is technically incorrect in using closeDate < '2016-12-31'; however, this creates a tie scenario for illustrative purposes):

    +---------+----------+------+
    | owner   | wonCount | rank |
    +---------+----------+------+
    | Bill    |       19 |    1 |
    | Chris   |       15 |    2 |
    | Maria   |       14 |    3 |
    | Bob     |       14 |    3 |
    | Olivier |       10 |    5 |
    +---------+----------+------+

    If the dense_rank function is used the rank values would be 1,2,3,3,4 and for the row_number function the values would be 1,2,3,4,5.

    First and Last Values

    The first_value and last_value functions allow determining the first and last values of a given range. Combined with a group by this allows summarizing opening and closing values. The example shows a more complex case where detailed information is presented for first and last opportunity by quarter.

    with example results:

    +------+---------+------------------+------------+-------------+-----------------+-----------+------------+
    | year | quarter | firstAccountName | firstOwner | firstAmount | lastAccountName | lastOwner | lastAmount |
    +------+---------+------------------+------------+-------------+-----------------+-----------+------------+
    | 2016 |       3 | Skidoo           | Bill       |   523295.07 | Skipstorm       | Bill      |            |
    | 2016 |       4 | Skimia           | Chris      |   961513.59 | Avamm           | Maria     |  112493.65 |
    | 2017 |       1 | Yombu            | Bob        |   536875.51 | Skaboo          | Chris     |  270273.08 |
    +------+---------+------------------+------------+-------------+-----------------+-----------+------------+

    Prior and Next Example

    Sometimes it is useful to understand the previous and next values in the context of a given row. The lag and lead window functions provide this capability. By default the offset is one, providing the prior or next value, but a larger offset can also be specified. The example query is a report of opportunities by account name showing the opportunity amount, and the prior and next opportunity amount for that account by close date.

    With example results:

    +-------------+------------+------------------+-------------+------------+
    | accountName | closeDate  | currentOppAmount | priorAmount | nextAmount |
    +-------------+------------+------------------+-------------+------------+
    | Abata       | 2016-09-10 |        645098.45 |        NULL |  161086.82 |
    | Abata       | 2016-10-14 |        161086.82 |   645098.45 |  350235.75 |
    | Abata       | 2016-12-18 |        350235.75 |   161086.82 |  878595.89 |
    | Abata       | 2016-12-31 |        878595.89 |   350235.75 |  922322.39 |
    | Abata       | 2017-01-21 |        922322.39 |   878595.89 |       NULL |
    | Abatz       | 2016-10-19 |        795424.15 |        NULL |       NULL |
    | Agimba      | 2016-07-09 |        288974.84 |        NULL |  914461.49 |
    | Agimba      | 2016-09-07 |        914461.49 |   288974.84 |  176645.52 |
    | Agimba      | 2016-09-20 |        176645.52 |   914461.49 |       NULL |
    +-------------+------------+------------------+-------------+------------+

    Quartiles Example

    The NTile window function allows for breaking up a data set into portions assigned a numeric value to each portion of the range. NTile(4) breaks the data up into quartiles (4 sets). The example query produces a report of all opportunities summarizing the quartile boundaries of amount values.

    With example results:

    +----------+-----------+-----------+
    | quartile | min       | max       |
    +----------+-----------+-----------+
    |        1 |   6337.15 | 287634.01 |
    |        2 | 288796.14 | 539977.45 |
    |        3 | 540070.04 | 748727.51 |
    |        4 | 753670.77 | 998864.47 |
    +----------+-----------+-----------+

    Percentile Example

    The percentile functions have a slightly different syntax from other window functions, as can be seen in the example below. These functions can only be applied to numeric values. The argument to the function is the percentile to evaluate. Following 'within group' is the sort expression, which indicates the sort column and optionally the order. Finally, after 'over' is an optional partition by clause; for no partition clause use 'over ()'. The example below utilizes the value 0.5 to calculate the median opportunity amount in the rows. The values sometimes differ because percentile_cont returns the average of the two middle rows for an even data set, while percentile_disc returns the first encountered in the sort.
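The cont/disc distinction reduces to simple arithmetic. A sketch in Python, using two sample amounts from the example data as an even-sized partition:

```python
# Two amounts forming one even-sized partition (sample values from the
# example data in this article).
amounts = sorted([699566.86, 539977.45])

# percentile_cont(0.5): continuous model -- interpolate between the two
# middle values, i.e. average them for an even-sized set.
pct_cont = (amounts[0] + amounts[1]) / 2   # -> 619772.155 (rounded)

# percentile_disc(0.5): discrete model -- return an actual element of the
# set: the first value whose cumulative distribution reaches 0.5.
pct_disc = amounts[0]                      # -> 539977.45

print(round(pct_cont, 3), pct_disc)
```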

    With example results:

    +---------+--------------+------------+-----------+-------------------+-----------+
    | owner   | accountName  | CloseDate  | amount    | pct_cont          | pct_disc  |
    +---------+--------------+------------+-----------+-------------------+-----------+
    | Bill    | Babbleopia   | 2016-10-02 | 437636.47 | 437636.4700000000 | 437636.47 |
    | Bill    | Thoughtworks | 2016-10-04 | 146086.51 | 437636.4700000000 | 437636.47 |
    | Bill    | Latz         | 2016-10-08 | 857254.87 | 437636.4700000000 | 437636.47 |
    | Chris   | Linkbridge   | 2016-10-07 | 539977.45 | 619772.1550000000 | 539977.45 |
    | Chris   | Avamm        | 2016-10-09 | 699566.86 | 619772.1550000000 | 539977.45 |
    | Olivier | Devpulse     | 2016-10-05 | 834235.93 | 667519.1100000000 | 500802.29 |
    | Olivier | Trupe        | 2016-10-07 | 500802.29 | 667519.1100000000 | 500802.29 |
    +---------+--------------+------------+-----------+-------------------+-----------+

    This page is licensed: CC BY-SA / Gnu FDL

    If there is no PARTITION BY clause, all rows of the result set define the group.

    ASC (default) and DESC options allow for ordering ascending or descending.

  • NULLS FIRST and NULLS LAST options specify whether null values come first or last in the ordering sequence. NULLS FIRST is the default for ASC order, and NULLS LAST is the default for DESC order.

  • RANGE specifies the window as a logical offset. If the expression evaluates to a numeric value then the ORDER BY expression must be a numeric or DATE type. If the expression evaluates to an interval value then the ORDER BY expression must be a DATE data type.
  • UNBOUNDED PRECEDING indicates the window starts at the first row of the partition.

  • UNBOUNDED FOLLOWING indicates the window ends at the last row of the partition.

  • CURRENT ROW specifies that the window starts or ends at the current row or value.

  • If omitted, the default is ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW.
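The practical difference between ROWS and RANGE with a CURRENT ROW boundary shows up when the ORDER BY expression has ties (peers). A minimal sketch, run against SQLite from Python only to keep the demo self-contained (the table `t` is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (k INT, a INT);
INSERT INTO t VALUES (1, 10), (2, 20), (2, 20);
""")

# Cumulative SUM under ROWS counts physical rows only; under RANGE the
# CURRENT ROW boundary also pulls in the current row's peers (equal k).
rows = conn.execute("""
    SELECT k,
           SUM(a) OVER (ORDER BY k
                        ROWS  BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS by_rows,
           SUM(a) OVER (ORDER BY k
                        RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS by_range
    FROM t ORDER BY k
""").fetchall()
print(sorted(rows))  # [(1, 10, 10), (2, 30, 50), (2, 50, 50)]
```

Both k=2 rows get by_range = 50 (each is the other's peer), while by_rows differs row by row.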

    LAST_VALUE()

    The value evaluated at the row that is the last row of the window frame (counting from 1); null if no such row.

    LEAD()

    Provides access to a row at a given physical offset beyond that position. Returns value evaluated at the row that is offset rows after the current row within the partition; if there is no such row, instead return default. Both offset and default are evaluated with respect to the current row. If omitted, offset defaults to 1 and default to null.

    MAX()

    Maximum value of expression across all input values.

    MEDIAN()

    An inverse distribution function that assumes a continuous distribution model. It takes a numeric or datetime value and returns the middle value or an interpolated value that would be the middle value once the values are sorted. Nulls are ignored in the calculation.

    MIN()

    Minimum value of expression across all input values.

    NTH_VALUE()

    The value evaluated at the row that is the nth row of the window frame (counting from 1); null if no such row.

    NTILE()

    Divides an ordered data set into a number of buckets indicated by expr and assigns the appropriate bucket number to each row. The buckets are numbered 1 through expr. The expr value must resolve to a positive constant for each partition. Integer ranging from 1 to the argument value, dividing the partition as equally as possible.

    PERCENT_RANK()

    Relative rank of the current row: (rank - 1) / (total rows - 1).

    PERCENTILE_CONT()

    An inverse distribution function that assumes a continuous distribution model. It takes a percentile value and a sort specification, and returns an interpolated value that would fall into that percentile value with respect to the sort specification. Nulls are ignored in the calculation.

    PERCENTILE_DISC()

    An inverse distribution function that assumes a discrete distribution model. It takes a percentile value and a sort specification and returns an element from the set. Nulls are ignored in the calculation.

    RANK()

    Rank of the current row with gaps; same as the ROW_NUMBER of its first peer.

    ROW_NUMBER()

    Number of the current row within its partition, counting from 1.

    STDDEV() STDDEV_POP()

    Computes the population standard deviation and returns the square root of the population variance.

    STDDEV_SAMP()

    Computes the cumulative sample standard deviation and returns the square root of the sample variance.

    SUM()

    Sum of expression across all input values.

    VARIANCE() VAR_POP()

    Population variance of the input values (square of the population standard deviation).

    VAR_SAMP()

    Sample variance of the input values (square of the sample standard deviation).
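The PERCENT_RANK and CUME_DIST formulas above can be checked numerically. A minimal sketch against SQLite from Python (SQLite and the `scores` table are stand-ins chosen only to make the example self-contained):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE scores (x INT);
INSERT INTO scores VALUES (10), (20), (30), (40);
""")

# PERCENT_RANK = (rank - 1) / (total rows - 1)
# CUME_DIST    = (peer or preceding rows) / (rows in partition)
rows = conn.execute("""
    SELECT x,
           PERCENT_RANK() OVER (ORDER BY x) AS pr,
           CUME_DIST()    OVER (ORDER BY x) AS cd
    FROM scores ORDER BY x
""").fetchall()

for x, pr, cd in rows:
    print(x, round(pr, 4), round(cd, 4))
# 10 0.0 0.25
# 20 0.3333 0.5
# 30 0.6667 0.75
# 40 1.0 1.0
```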


    AVG()

    The average of all input values.

    COUNT()

    Number of input rows.

    CUME_DIST()

    Calculates the cumulative distribution, or relative rank, of the current row to other rows in the same partition. Number of peer or preceding rows / number of rows in partition.

    DENSE_RANK()

    Ranks items in a group leaving no gaps in ranking sequence when there are ties.

    FIRST_VALUE()

    The value evaluated at the row that is the first row of the window frame (counting from 1); null if no such row.

    LAG()

    The value evaluated at the row that is offset rows before the current row within the partition; if there is no such row, instead return default. Both offset and default are evaluated with respect to the current row. If omitted, offset defaults to 1 and default to null. LAG provides access to more than one row of a table at the same time without a self-join. Given a series of rows returned from a query and a position of the cursor, LAG provides access to a row at a given physical offset prior to that position.


    function_name ([expression [, expression ... ]]) OVER ( window_definition )
    [ PARTITION BY expression [, ...] ]
    [ ORDER BY expression [ ASC | DESC ] [ NULLS { FIRST | LAST } ] [, ...] ]
    [ frame_clause ]
    { RANGE | ROWS } frame_start
    { RANGE | ROWS } BETWEEN frame_start AND frame_end
    UNBOUNDED PRECEDING
    value PRECEDING
    CURRENT ROW
    value FOLLOWING
    UNBOUNDED FOLLOWING
    CREATE TABLE opportunities (
    id INT,
    accountName VARCHAR(20),
    name VARCHAR(128),
    owner VARCHAR(7),
    amount DECIMAL(10,2),
    closeDate DATE,
    stageName VARCHAR(11)
    ) ENGINE=columnstore;
    SELECT owner, 
    accountName, 
    CloseDate, 
    amount, 
    SUM(amount) OVER (ORDER BY CloseDate ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) cumeWon, 
    MAX(amount) OVER (ORDER BY CloseDate ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) runningMax
    FROM opportunities 
    WHERE stageName='ClosedWon' 
    AND closeDate >= '2016-10-02' AND closeDate <= '2016-10-09' 
    ORDER BY CloseDate;
    SELECT owner,  
    accountName,  
    CloseDate,  
    amount,  
    SUM(amount) OVER (PARTITION BY owner ORDER BY CloseDate ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) cumeWon,  
    MAX(amount) OVER (PARTITION BY owner ORDER BY CloseDate ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) runningMax 
    FROM opportunities  
    WHERE stageName='ClosedWon' 
    AND closeDate >= '2016-10-02' AND closeDate <= '2016-10-09'  
    ORDER BY owner, CloseDate;
    SELECT owner, 
    wonCount, 
    rank() OVER (ORDER BY wonCount DESC) rank 
    FROM (
      SELECT owner, 
      COUNT(*) wonCount 
      FROM opportunities 
      WHERE stageName='ClosedWon' 
      AND closeDate >= '2016-10-01' AND closeDate < '2016-12-31'  
      GROUP BY owner
    ) t
    ORDER BY rank;
    SELECT a.YEAR, 
    a.quarter, 
    f.accountName firstAccountName, 
    f.owner firstOwner, 
    f.amount firstAmount, 
    l.accountName lastAccountName, 
    l.owner lastOwner, 
    l.amount lastAmount 
    FROM (
      SELECT YEAR, 
      QUARTER, 
      MIN(firstId) firstId, 
      MIN(lastId) lastId 
      FROM (
        SELECT YEAR(closeDate) YEAR, 
        quarter(closeDate) QUARTER, 
        first_value(id) OVER (PARTITION BY YEAR(closeDate), quarter(closeDate) ORDER BY closeDate ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) firstId, 
        last_value(id) OVER (PARTITION BY YEAR(closeDate), quarter(closeDate) ORDER BY closeDate ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) lastId 
        FROM opportunities  WHERE stageName='ClosedWon'
      ) t 
      GROUP BY YEAR, QUARTER ORDER BY YEAR,QUARTER
    ) a 
    JOIN opportunities f ON a.firstId = f.id 
    JOIN opportunities l ON a.lastId = l.id 
    ORDER BY YEAR, QUARTER;
    SELECT accountName, 
    closeDate,  
    amount currentOppAmount, 
    lag(amount) OVER (PARTITION BY accountName ORDER BY closeDate) priorAmount, lead(amount) OVER (PARTITION BY accountName ORDER BY closeDate) nextAmount 
    FROM opportunities 
    ORDER BY accountName, closeDate 
    LIMIT 9;
    SELECT t.quartile, 
    MIN(t.amount) MIN, 
    MAX(t.amount) MAX 
    FROM (
      SELECT amount, 
      ntile(4) OVER (ORDER BY amount ASC) quartile 
      FROM opportunities 
      WHERE closeDate >= '2016-10-01' AND closeDate <= '2016-12-31'
      ) t 
    GROUP BY quartile 
    ORDER BY quartile;
    SELECT owner,  
    accountName,  
    CloseDate,  
    amount,
    percentile_cont(0.5) within GROUP (ORDER BY amount) OVER (PARTITION BY owner) pct_cont,
    percentile_disc(0.5) within GROUP (ORDER BY amount) OVER (PARTITION BY owner) pct_disc
    FROM opportunities  
    WHERE stageName='ClosedWon' 
    AND closeDate >= '2016-10-02' AND closeDate <= '2016-10-09'  
    ORDER BY owner, CloseDate;