Overview

MariaDB starting with 10.0.16

The JSON table type was introduced in MariaDB 10.0.16.

JSON (JavaScript Object Notation) is a lightweight data-interchange format widely used on the Internet. Many applications, generally written in JavaScript or PHP, use and produce JSON data, which is exchanged as files in different physical formats.

It is also possible to query, create or update such information in a database-like manner. MongoDB does it using a JavaScript-like language. PostgreSQL does it using a specific data type and related functions, much like MariaDB's dynamic columns.

The CONNECT engine adds this facility to MariaDB by supporting tables based on JSON data files. As for XML tables, this is done by creating tables that describe what should be retrieved from the file and how it should be processed.

Let us start with the file “biblio3.json”, the JSON equivalent of the XML Xsample file described in the XML table chapter:

[
  {
    "ISBN": "9782212090819",
    "LANG": "fr",
    "SUBJECT": "applications",
    "AUTHOR": [
      {
        "FIRSTNAME": "Jean-Christophe",
        "LASTNAME": "Bernadac"
      },
      {
        "FIRSTNAME": "François",
        "LASTNAME": "Knab"
      }
    ],
    "TITLE": "Construire une application XML",
    "PUBLISHER": {
      "NAME": "Eyrolles",
      "PLACE": "Paris"
    },
    "DATEPUB": 1999
  },
  {
    "ISBN": "9782840825685",
    "LANG": "fr",
    "SUBJECT": "applications",
    "AUTHOR": [
      {
        "FIRSTNAME": "William J.",
        "LASTNAME": "Pardi"
      }
    ],
    "TITLE": "XML en Action",
    "TRANSLATED": {
       "PREFIX": "adapté de l'anglais par",
       "TRANSLATOR": {
          "FIRSTNAME": "James",
        "LASTNAME": "Guerin"
        }
    },
    "PUBLISHER": {
      "NAME": "Microsoft Press",
      "PLACE": "Paris"
    },
    "DATEPUB": 1999
  }
]

This file contains the different items existing in JSON:

  • Arrays: enclosed in square brackets, they contain a list of comma-separated values.
  • Objects: enclosed in curly brackets, they contain a comma-separated list of pairs, each pair composed of a key name between double quotes, followed by a ‘:’ character and a value.
  • Values: a value can be an array or an object. It can also be a string between double quotes, an integer or floating-point number, a Boolean value, or a null value.

The simplest way for CONNECT to locate a table in such a file is an array containing a list of objects. Each array value is a table row, and each pair of the row objects represents a column, the key being the column name and the value the column value.

A first try to create a table on this file will be to take the outer array as the table:

create table jsample (
ISBN char(15),
LANG char(2),
SUBJECT char(32),
AUTHOR char(128),
TITLE char(32),
TRANSLATED char(80),
PUBLISHER char(20),
DATEPUB int(4))
engine=CONNECT table_type=JSON
File_name='biblio3.json';

If we execute the query:

select isbn, author, title, publisher from jsample;

We get the result:

isbn          | author                   | title                          | publisher
9782212090819 | Jean-Christophe Bernadac | Construire une application XML | Eyrolles Paris
9782840825685 | William J. Pardi         | XML en Action                  | Microsoft Press Pari

Note that by default, column values that are objects are set to the concatenation of all the string values of the object, separated by a blank. When a column value is an array, only the first item of the array is retrieved.

However, things are generally more complicated. While JSON files do not contain attributes (although object pairs are similar to attributes), they introduce a new item: arrays. We have seen that they can be used like XML multiple nodes, here to specify several authors, but they are more general because they can contain objects of different types, even if it may not be advisable to do so.

This is why CONNECT enables the specification of a column JPATH, given in the field_format option, that describes exactly where the items to display are located and how to handle arrays.

Here is an example of a new table that can be created on the same file, allowing us to choose the column names, retrieve some sub-objects, and specify how the author array is handled.

create table jsampall (
ISBN char(15),
Language char(2) field_format='LANG',
Subject char(32) field_format='SUBJECT',
Author char(128) field_format='AUTHOR:[" and "]',
Title char(32) field_format='TITLE',
Translation char(32) field_format='TRANSLATED:PREFIX',
Translator char(80) field_format='TRANSLATED:TRANSLATOR',
Publisher char(20) field_format='PUBLISHER:NAME',
Location char(16) field_format='PUBLISHER:PLACE',
Year int(4) field_format='DATEPUB')
engine=CONNECT table_type=JSON File_name='biblio3.json';

Given the query:

select title, author, publisher, location from jsampall;

The result is:

title                          | author                                     | publisher       | location
Construire une application XML | Jean-Christophe Bernadac and François Knab | Eyrolles        | Paris
XML en Action                  | William J. Pardi                           | Microsoft Press | Paris

Here is another example showing that one can choose what to extract from the file and how to “expand” an array, meaning to generate one row for each array value:

create table jsampex (
ISBN char(15),
Title char(32) field_format='TITLE',
AuthorFN char(128) field_format='AUTHOR:[X]:FIRSTNAME',
AuthorLN char(128) field_format='AUTHOR:[X]:LASTNAME',
Year int(4) field_format='DATEPUB')
engine=CONNECT table_type=JSON File_name='biblio3.json';

It is displayed as:

ISBN          | Title                          | AuthorFN        | AuthorLN | Year
9782212090819 | Construire une application XML | Jean-Christophe | Bernadac | 1999
9782212090819 | Construire une application XML | François        | Knab     | 1999
9782840825685 | XML en Action                  | William J.      | Pardi    | 1999

The Jpath Specification

This is the description of the path to follow to reach the required item. Each step is the key name (case sensitive) of the pair when crossing an object, and the number of the value between square brackets when crossing an array. Each specification is separated by a ‘:’ character.

For instance, in the above file, the last name of the second author of a book is reached by:

AUTHOR:[1]:LASTNAME
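
For instance, a table retrieving only that item could be declared as follows (a sketch on the same biblio3.json file; the table and column names are illustrative):

```sql
create table jsecond (
ISBN char(15),
Title char(32) field_format='TITLE',
SecondAuthorLN char(64) field_format='AUTHOR:[1]:LASTNAME')
engine=CONNECT table_type=JSON File_name='biblio3.json';
```

For books having a single author, such as the second one in the file, this column would presumably come back empty or null.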

The array specification can also indicate how it must be processed:

Specification | Array Type | Limit | Description
[n]           | All        | N.A.  | Take the nth value of the array.
[X] or [x]    | All        |       | Expand. Generate one row for each array value.
["string"]    | String     |       | Concatenate all values separated by the specified string.
[+]           | Numeric    |       | Make the sum of all the array values.
[*]           | Numeric    |       | Make the product of all array values.
[!]           | Numeric    |       | Make the average of all the array values.
[>] or [<]    | All        |       | Return the greatest or least value of the array.
[#]           | All        | N.A.  | Return the number of values in the array.
[]            | All        |       | Expand if under an expanded object; otherwise sum if numeric, else concatenation separated by ", ".
(none)        | All        | N.A.  | If an array, expand it if under an expanded object, or take its first value.

Note 1: When the LIMIT restriction is applicable, only the first m array items are used, m being the value of the LIMIT option (to be specified in option_list). The LIMIT default value is 10.
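
For instance, if the concatenated or summed arrays could hold more than ten values, the limit could be raised when creating the table (a sketch; jsampall25 is an illustrative name, the other columns of jsampall being unchanged):

```sql
create table jsampall25 (
ISBN char(15),
Author char(255) field_format='AUTHOR:[" and "]')
engine=CONNECT table_type=JSON File_name='biblio3.json'
option_list='Limit=25';
```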

Note 2: An alternative way to indicate what is to be expanded is to use the expand option in the option list, for instance:

OPTION_LIST='Expand=AUTHOR'

AUTHOR is here the key of the pair that has the array as a value (case sensitive). Expand is limited to only one branch (expanded arrays must be under the same object).
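
Using this option, the jsampex table shown earlier could presumably be declared equivalently without the [X] steps, naming the expanded array in the option list instead (a sketch):

```sql
create table jsampex2 (
ISBN char(15),
Title char(32) field_format='TITLE',
AuthorFN char(128) field_format='AUTHOR::FIRSTNAME',
AuthorLN char(128) field_format='AUTHOR::LASTNAME',
Year int(4) field_format='DATEPUB')
engine=CONNECT table_type=JSON File_name='biblio3.json'
option_list='Expand=AUTHOR';
```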

Let us take as an example the file expense.json. The table jexpall expands all under and including the week array:

create table jexpall (
WHO char(12),
WEEK int(2) field_format='WEEK:[x]:NUMBER',
WHAT char(32) field_format='WEEK:[x]:EXPENSE:[x]:WHAT',
AMOUNT double(8,2) field_format='WEEK:[x]:EXPENSE:[x]:AMOUNT')
engine=CONNECT table_type=JSON File_name='expense.json';

WHO   | WEEK | WHAT | AMOUNT
Joe   | 3    | Beer | 18.00
Joe   | 3    | Food | 12.00
Joe   | 3    | Food | 19.00
Joe   | 3    | Car  | 20.00
Joe   | 4    | Beer | 19.00
Joe   | 4    | Beer | 16.00
Joe   | 4    | Food | 17.00
Joe   | 4    | Food | 17.00
Joe   | 4    | Beer | 14.00
Joe   | 5    | Beer | 14.00
Joe   | 5    | Food | 12.00
Beth  | 3    | Beer | 16.00
Beth  | 4    | Food | 17.00
Beth  | 4    | Beer | 15.00
Beth  | 5    | Food | 12.00
Beth  | 5    | Beer | 20.00
Janet | 3    | Car  | 19.00
Janet | 3    | Food | 18.00
Janet | 3    | Beer | 18.00
Janet | 4    | Car  | 17.00
Janet | 5    | Beer | 14.00
Janet | 5    | Car  | 12.00
Janet | 5    | Beer | 19.00
Janet | 5    | Food | 12.00

The table jexpw shows what was bought and the sum and average of amounts for each person and week:

create table jexpw (
WHO char(12) not null,
WEEK int(2) not null field_format='WEEK:[x]:NUMBER',
WHAT char(32) not null field_format='WEEK::EXPENSE:[", "]:WHAT',
SUM double(8,2) not null field_format='WEEK::EXPENSE:[+]:AMOUNT',
AVERAGE double(8,2) not null field_format='WEEK::EXPENSE:[!]:AMOUNT')
engine=CONNECT table_type=JSON File_name='expense.json';

WHO   | WEEK | WHAT                         | SUM   | AVERAGE
Joe   | 3    | Beer, Food, Food, Car        | 69.00 | 17.25
Joe   | 4    | Beer, Beer, Food, Food, Beer | 83.00 | 16.60
Joe   | 5    | Beer, Food                   | 26.00 | 13.00
Beth  | 3    | Beer                         | 16.00 | 16.00
Beth  | 4    | Food, Beer                   | 32.00 | 16.00
Beth  | 5    | Food, Beer                   | 32.00 | 16.00
Janet | 3    | Car, Food, Beer              | 55.00 | 18.33
Janet | 4    | Car                          | 17.00 | 17.00
Janet | 5    | Beer, Car, Beer, Food        | 57.00 | 14.25
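
Because jexpall is a fully expanded table, the SUM and AVERAGE columns of jexpw can be cross-checked with an ordinary aggregate query on it, for instance:

```sql
select who, week, sum(amount) as sum, avg(amount) as average
from jexpall
group by who, week;
```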

Let us see what the table jexpz does:

create table jexpz (
WHO char(12) not null,
WEEKS char(12) not null field_format='WEEK:[", "]:NUMBER',
SUMS char(64) not null field_format='WEEK:["+"]:EXPENSE:[+]:AMOUNT',
SUM double(8,2) not null field_format='WEEK:[+]:EXPENSE:[+]:AMOUNT',
AVGS char(64) not null field_format='WEEK:["+"]:EXPENSE:[!]:AMOUNT',
SUMAVG double(8,2) not null field_format='WEEK:[+]:EXPENSE:[!]:AMOUNT',
AVGSUM double(8,2) not null field_format='WEEK:[!]:EXPENSE:[+]:AMOUNT',
AVERAGE double(8,2) not null field_format='WEEK:[!]:EXPENSE:[x]:AMOUNT')
engine=CONNECT table_type=JSON
File_name='E:/Data/Json/expense2.json';

WHO   | WEEKS   | SUMS              | SUM    | AVGS              | SUMAVG | AVGSUM | AVERAGE
Joe   | 3, 4, 5 | 69.00+83.00+26.00 | 178.00 | 17.25+16.60+13.00 | 46.85  | 59.33  | 16.18
Beth  | 3, 4, 5 | 16.00+32.00+32.00 | 80.00  | 16.00+16.00+16.00 | 48.00  | 26.67  | 16.00
Janet | 3, 4, 5 | 55.00+17.00+57.00 | 129.00 | 18.33+17.00+14.25 | 49.58  | 43.00  | 16.12

For all persons:

  • Column 1 shows the person name.
  • Column 2 shows the weeks for which values are calculated.
  • Column 3 lists the sums of expenses for each week.
  • Column 4 calculates the sum of all expenses by person.
  • Column 5 shows the week’s expense averages.
  • Column 6 calculates the sum of these averages.
  • Column 7 calculates the average of the week’s sum of expenses.
  • Column 8 calculates the average expense by person.

It would be very difficult, if at all possible, to obtain this result from the jexpall table using an SQL query.

Having Columns defined by Discovery

It is possible to let the MariaDB discovery process do the job of column specification. When columns are not defined in the create table statement, CONNECT endeavors to analyze the JSON file and to provide the column specifications. This is possible only for tables represented by an array of objects because CONNECT retrieves the column names from the object pair keys and their definition from the object pair values. For instance, the jsample table could be created saying:

create table jsample engine=connect table_type=JSON file_name='biblio3.json';

Let’s check how it was actually specified using the show create table statement:

CREATE TABLE `jsample` (
  `ISBN` char(13) NOT NULL,
  `LANG` char(2) NOT NULL,
  `SUBJECT` char(12) NOT NULL,
  `AUTHOR` varchar(256) DEFAULT NULL,
  `TITLE` char(30) NOT NULL,
  `TRANSLATED` varchar(256) DEFAULT NULL,
  `PUBLISHER` varchar(256) DEFAULT NULL,
  `DATEPUB` int(4) NOT NULL
) ENGINE=CONNECT DEFAULT CHARSET=latin1 `TABLE_TYPE`='JSON' `FILE_NAME`='biblio3.json';

It is equivalent except for the column sizes, which have been calculated from the file as the maximum length of the corresponding column when it is a normal value. For columns that are JSON arrays or objects, the column is specified as a varchar string of length 256, supposedly big enough to contain the sub-object's concatenated values. Nullable is set to true if the column is null or missing in some rows, or if its JPATH contains arrays.

If a more complex definition is desired, you can ask CONNECT to analyze the JPATH up to a given level using the level option in the option list. The level value is the number of sub-objects that are taken in the JPATH. For instance:

create table jsampall2 engine=connect table_type=JSON 
  file_name='biblio3.json' option_list='level=1';

This will define the table as:

CREATE TABLE `jsampall2` (
  `ISBN` char(13) NOT NULL,
  `LANG` char(2) NOT NULL,
  `SUBJECT` char(12) NOT NULL,
  `AUTHOR_FIRSTNAME` char(15) NOT NULL `FIELD_FORMAT`='AUTHOR::FIRSTNAME',
  `AUTHOR_LASTNAME` char(8) NOT NULL `FIELD_FORMAT`='AUTHOR::LASTNAME',
  `TITLE` char(30) NOT NULL,
  `TRANSLATED_PREFIX` char(23) DEFAULT NULL `FIELD_FORMAT`='TRANSLATED:PREFIX',
  `TRANSLATED_TRANSLATOR` varchar(256) DEFAULT NULL `FIELD_FORMAT`='TRANSLATED:TRANSLATOR',
  `PUBLISHER_NAME` char(15) NOT NULL `FIELD_FORMAT`='PUBLISHER:NAME',
  `PUBLISHER_PLACE` char(5) NOT NULL `FIELD_FORMAT`='PUBLISHER:PLACE',
  `DATEPUB` int(4) NOT NULL
) ENGINE=CONNECT DEFAULT CHARSET=latin1 `TABLE_TYPE`='JSON' `
  FILE_NAME`='biblio3.json' `OPTION_LIST`='level=1';

The problem is that CONNECT cannot guess what you want to do with arrays. Here the AUTHOR array is left undefined, which means that only its first value will be retrieved unless you also specify “Expand=AUTHOR” in the option list. But of course, you can replace this definition with anything else.

This method can be used as a quick way to make a “template” table definition that can later be edited to make the desired definition. In particular, column names are constructed from all the object keys of their path in order to have distinct column names. This can be manually edited to have the desired names, provided their JPATH key names are not modified.
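
For instance, assuming CONNECT column options are accepted in ALTER TABLE statements, the generated names could be shortened while keeping their JPATH (a sketch; alternatively, drop and re-create the table with the edited definition):

```sql
alter table jsampall2
  change `AUTHOR_FIRSTNAME` `AuthorFN` char(15) not null `FIELD_FORMAT`='AUTHOR::FIRSTNAME',
  change `AUTHOR_LASTNAME` `AuthorLN` char(8) not null `FIELD_FORMAT`='AUTHOR::LASTNAME';
```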

JSON Catalogue Tables

Another way to see JSON table column specifications is to use a catalogue table. For instance:

create table bibcol engine=connect table_type=JSON file_name='biblio3.json' 
  option_list='level=2' catfunc=columns;
select column_name, type_name type, column_size size, jpath from bibcol;

which returns:

column_name                     | type    | size | jpath
ISBN                            | CHAR    | 13   |
LANG                            | CHAR    | 2    |
SUBJECT                         | CHAR    | 12   |
AUTHOR_FIRSTNAME                | CHAR    | 15   | AUTHOR::FIRSTNAME
AUTHOR_LASTNAME                 | CHAR    | 8    | AUTHOR::LASTNAME
TITLE                           | CHAR    | 30   |
TRANSLATED_PREFIX               | CHAR    | 23   | TRANSLATED:PREFIX
TRANSLATED_TRANSLATOR_FIRSTNAME | CHAR    | 5    | TRANSLATED:TRANSLATOR:FIRSTNAME
TRANSLATED_TRANSLATOR_LASTNAME  | CHAR    | 6    | TRANSLATED:TRANSLATOR:LASTNAME
PUBLISHER_NAME                  | CHAR    | 15   | PUBLISHER:NAME
PUBLISHER_PLACE                 | CHAR    | 5    | PUBLISHER:PLACE
DATEPUB                         | INTEGER | 4    |

All this is mostly useful when creating a table on a remote file that you cannot easily see.

Finding the table within a JSON file

Given the file “facebook.json”:

{
   "data": [
      {
         "id": "X999_Y999",
         "from": {
            "name": "Tom Brady", "id": "X12"
         },
         "message": "Looking forward to 2010!",
         "actions": [
            {
               "name": "Comment",
               "link": "http://www.facebook.com/X999/posts/Y999"
            },
            {
               "name": "Like",
               "link": "http://www.facebook.com/X999/posts/Y999"
            }
         ],
         "type": "status",
         "created_time": "2010-08-02T21:27:44+0000",
         "updated_time": "2010-08-02T21:27:44+0000"
      },
      {
         "id": "X998_Y998",
         "from": {
            "name": "Peyton Manning", "id": "X18"
         },
         "message": "Where's my contract?",
         "actions": [
            {
               "name": "Comment",
               "link": "http://www.facebook.com/X998/posts/Y998"
            },
            {
               "name": "Like",
               "link": "http://www.facebook.com/X998/posts/Y998"
            }
         ],
         "type": "status",
         "created_time": "2010-08-02T21:27:44+0000",
         "updated_time": "2010-08-02T21:27:44+0000"
      }
   ]
}

The table we want to analyze is represented by the array value of the “data” object. Here is how this is specified in the create table statement:

create table jfacebook (
`ID` char(10) field_format='id',
`Name` char(32) field_format='from:name',
`MyID` char(16) field_format='from:id',
`Message` varchar(256) field_format='message',
`Action` char(16) field_format='actions::name',
`Link` varchar(256) field_format='actions::link',
`Type` char(16) field_format='type',
`Created` datetime date_format='YYYY-MM-DD\'T\'hh:mm:ss' field_format='created_time',
`Updated` datetime date_format='YYYY-MM-DD\'T\'hh:mm:ss' field_format='updated_time')
engine=connect table_type=JSON file_name='facebook.json' option_list='Object=data,Expand=actions';

It is the object option that gives the Jpath of the table. Note also the alternate way to declare the array to be expanded, using the expand option of the option_list.

Because some string values contain a date representation, the corresponding columns are declared as datetime and the date format is specified for them.

The Jpath of the object option has the same syntax as the column Jpath, but of course all array steps must be specified using the [n] format.

Note: All this applies only to tables having pretty = 2 (see below).
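
A query on this table returns one row per expanded action; for instance, each message should appear twice, once for its “Comment” action and once for its “Like” action:

```sql
select Name, Action, Link, Created from jfacebook;
```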

JSON File Formats

The examples we have seen so far are files that, even if they can be formatted in different ways (blanks, tabs, carriage returns and line feeds are ignored when parsing them), respect the JSON syntax and are made of only one item (object or array). As for XML files, they are entirely parsed and a memory representation is built to process them. This implies that they must be of reasonable size to avoid an out-of-memory condition. Tables based on such files are recognized by the option Pretty=2, which we did not specify above because it is the default.

An alternate format, which is the format of exported MongoDB files, is a file where each row is physically stored in one file record. For instance:

{ "_id" : "01001", "city" : "AGAWAM", "loc" : [ -72.622739, 42.070206 ], "pop" : 15338, "state" : "MA" }
{ "_id" : "01002", "city" : "CUSHMAN", "loc" : [ -72.51564999999999, 42.377017 ], "pop" : 36963, "state" : "MA" }
{ "_id" : "01005", "city" : "BARRE", "loc" : [ -72.1083540000001, 42.409698 ], "pop" : 4546, "state" : "MA" }
{ "_id" : "01007", "city" : "BELCHERTOWN", "loc" : [ -72.4109530000001, 42.275103 ], "pop" : 10579, "state" : "MA" }
…
{ "_id" : "99929", "city" : "WRANGELL", "loc" : [ -132.352918, 56.433524 ], "pop" : 2573, "state" : "AK" }
{ "_id" : "99950", "city" : "KETCHIKAN", "loc" : [ -133.18479, 55.942471 ], "pop" : 422, "state" : "AK" }

The original file, “cities.json”, has 29352 records. To base a table on this file we must specify the option Pretty=0 in the option list. For instance:

create table cities (
`_id` char(5) key,
`city` char(32),
`long` double(12,6) field_format='loc:[0]',
`lat` double(12,6) field_format='loc:[1]',
`pop` int(8),
`state` char(2) distrib='clustered')
engine=CONNECT table_type=JSON file_name='cities.json' lrecl=128 option_list='pretty=0';

Note the use of [n] array specifications for the longitude and latitude columns.

When using this format, the table is processed by CONNECT like a DOS, CSV or FMT table. Rows are retrieved and parsed by records and the table can be very large. Another advantage is that such a table can be indexed, which can be of great value for very large tables. The “distrib” option of the “state” column tells CONNECT to use block indexing when possible.
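
For instance, assuming standard CONNECT indexing applies to this table, an additional index could be created and used in lookups (a sketch):

```sql
create index xcity on cities(city);
select city, pop from cities where city = 'WRANGELL';
```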

For such tables – as well as for pretty=1 ones – the record size must be specified using the LRECL option. Be sure you don’t specify it too small as it is used to allocate the read/write buffers and the memory used for parsing the rows. If in doubt, be generous as it does not cost much in memory allocation.

Another format exists, noted by Pretty=1, which is similar to this one but has some additions to represent a JSON array. Header and trailer records are added containing the opening and closing square brackets, and all records but the last are followed by a comma. It has the same advantages for reading and updating, but inserting and deleting are executed the pretty=2 way.
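
Under these rules, the beginning of a cities file rewritten in pretty=1 format would presumably look like this (bracket records added, commas ending all records but the last):

```json
[
{ "_id" : "01001", "city" : "AGAWAM", "loc" : [ -72.622739, 42.070206 ], "pop" : 15338, "state" : "MA" },
{ "_id" : "01002", "city" : "CUSHMAN", "loc" : [ -72.51564999999999, 42.377017 ], "pop" : 36963, "state" : "MA" }
]
```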

Alternate Table Arrangement

We have seen that the most natural way to represent a table in a JSON file is an array of objects. However, other possibilities exist. A table can be an array of arrays, a one-column table can be an array of values, and a one-row table can be just one object or one value. One-row tables are internally handled by adding a one-value array around them.

Let us see how to handle, for instance, a table that is an array of arrays. The file:

[
  [56, "Coucou", 500.00],
  [[2,0,1,4], "Hello World", 2.0316],
  ["1784", "John Doo", 32.4500],
  [1914, ["Nabucho","donosor"], 5.12],
  [7, "sept", [0.77,1.22,2.01]],
  [8, "huit", 13.0]
]

A table can be created on this file as:

create table xjson (
`a` int(6) field_format='[1]',
`b` char(32) field_format='[2]',
`c` double(10,4) field_format='[3]')
engine=connect table_type=JSON file_name='test.json'
option_list='Pretty=1,Jmode=1,Base=1' lrecl=128;

Columns are specified by their position in the row arrays. By default, positions are zero-based, but for this table the base was set to 1 by the Base option of the option list. Another new option in the option list is Jmode=1. It indicates what type of table this is. The Jmode values are:

  • 0: An array of objects. This is the default.
  • 1: An array of arrays, like this one.
  • 2: An array of values.

When reading, this is not required as the type of the array items is specified for the columns; however, it is required when inserting new rows so CONNECT knows what to insert. For instance:

insert into xjson values(25, 'Breakfast', 1.414);

After this, it is displayed as:

a    | b           | c
56   | Coucou      | 500.0000
2    | Hello World | 2.0316
1784 | John Doo    | 32.4500
1914 | Nabucho     | 5.1200
7    | sept        | 0.7700
8    | huit        | 13.0000
25   | Breakfast   | 1.4140

Unspecified array values are represented by their first element.

Getting and Setting JSON Representation of a Column

It is possible to retrieve and display a column's content as the full JSON string corresponding to it in the JSON file. This is specified in the JPATH by a “*” where the object or array would otherwise be specified. For instance:

create table jsample2 (
ISBN char(15),
Lng char(2) field_format='LANG',
json_Author char(255) field_format='AUTHOR:*',
Title char(32) field_format='TITLE',
Year int(4) field_format='DATEPUB')
engine=CONNECT table_type=JSON file_name='biblio3.json';

Now the query:

select * from jsample2 where isbn = '9782840825685';

will return and display:

ISBN          | Lng | json_Author                                     | Title         | Year
9782840825685 | fr  | [{"FIRSTNAME":"William J.","LASTNAME":"Pardi"}] | XML en Action | 1999

This also works on input: a column specified this way can be directly set to a valid JSON string.

This feature is of great value as we will see below.

Write Operations on JSON Tables

The SQL commands INSERT, UPDATE and DELETE are fully supported for JSON tables. For INSERT and UPDATE, if the target values are simple values, there are no problems.

However, there are some issues when the added or modified values are objects or arrays.

Concerning objects, the same problems exist as those we have already seen with the XML type. The added or modified object will have the format described in the table definition, which can differ from the one in the JSON file. Modifications should be done using a table specifying the full path of the modified objects.

New problems arise when trying to modify the values of an array. Only updates can be done on the original table. First of all, for the array values to be handled as distinct values, all update operations concerning array values must be done using a table expanding this array.

For instance, to modify the authors of the table based on biblio3.json, the jsampex table must be used. Doing so, updating and deleting authors is possible using standard SQL commands. For example, to change the first name of Knab from François to John:

update jsampex set authorfn = 'John' where authorln = 'Knab';

However, it would be wrong to do:

update jsampex set authorfn = 'John' where isbn = '9782212090819';

Because this would change the first name of both authors as they share the same ISBN.

Where things become more difficult is when trying to delete or insert an author of a book. Indeed, a delete command will delete the whole book, and an insert command will add a new complete row instead of adding a new author in the same array. Here we are penalized by the SQL language, which gives us no way to specify this. We would need something like:

update jsampex add authorfn = 'Charles', authorln = 'Dickens'
where title = 'XML en Action';

However, this does not exist in SQL. Does this mean that it is impossible to do? No, but it requires us to use a table specified on the same file but adapted to this task. One way to do it is to specify a table for which the authors are no longer an expanded array. Suppose we want to add an author to the “XML en Action” book. We will do it on a table containing just the author(s) of that book, which is the second book of the table.

create table jauthor (
FIRSTNAME char(64),
LASTNAME char(64))
engine=CONNECT table_type=JSON File_name='biblio3.json' option_list='Object=[1]:AUTHOR';

The command:

select * from jauthor;

replies:

FIRSTNAME  | LASTNAME
William J. | Pardi

It is a standard JSON table that is an array of objects in which we can freely insert or delete rows.

insert into jauthor values('Charles','Dickens');

We can check that this was done correctly by:

select * from jsampex;

This will display:

ISBN          | Title                          | AuthorFN        | AuthorLN | Year
9782212090819 | Construire une application XML | Jean-Christophe | Bernadac | 1999
9782212090819 | Construire une application XML | John            | Knab     | 1999
9782840825685 | XML en Action                  | William J.      | Pardi    | 1999
9782840825685 | XML en Action                  | Charles         | Dickens  | 1999

Note: If this table were a big table with many books, it would be difficult to know what the order of a specific book is in the table. This can be found by adding a special ROWID column in the table.
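
A minimal sketch of such a table, assuming CONNECT's ROWID special column is available for JSON tables (jbookrow and nrow are illustrative names):

```sql
create table jbookrow (
nrow int special=ROWID,
ISBN char(15),
Title char(32) field_format='TITLE')
engine=CONNECT table_type=JSON File_name='biblio3.json';

select nrow, ISBN, Title from jbookrow where isbn = '9782840825685';
```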

However, an alternate way to do it is by using direct JSON column representation as in the JSAMPLE2 table. This can be done by:

update jsample2 set json_Author =
'[{"FIRSTNAME":"William J.","LASTNAME":"Pardi"},
  {"FIRSTNAME":"Charles","LASTNAME":"Dickens"}]'
where isbn = '9782840825685';

Here, we did not have to find the index of the sub-array to modify. However, this is not quite satisfying because we had to manually write the whole JSON value assigned to the json_Author column.

This is why we need specific functions to do so. They are introduced now.

JSON User Defined Functions

Although such functions written by other parties do exist, CONNECT provides its own UDFs that are specifically adapted to the JSON table type and easily available because, being inside the CONNECT library or DLL, they require no additional module to be loaded. In some cases, those other functions may alternatively be usable, even though this was not tested.

Here is the list of the CONNECT functions; more can be added if required.

Name | Type | Return | Description | Added
jbin_array | Function | STRING* | Makes a JSON array containing its arguments. | MariaDB 10.0.23, MariaDB 10.1.9
jbin_array_add | Function | STRING* | Adds its second argument to its first array argument. | MariaDB 10.0.23, MariaDB 10.1.9
jbin_array_add_values | Function | STRING* | Adds all following arguments to its first array argument. | MariaDB 10.0.23
jbin_array_delete | Function | STRING* | Deletes the nth element of its first array argument. | MariaDB 10.0.23, MariaDB 10.1.9
jbin_file | Function | STRING* | Returns the content of a (json) file. | MariaDB 10.0.23, MariaDB 10.1.9
jbin_get_item | Function | STRING* | Accesses and returns a json item by a JPATH key. | MariaDB 10.0.23, MariaDB 10.1.9
jbin_insert_item | Function | STRING | Inserts item values located at given paths. | MariaDB 10.0.23
jbin_item_merge | Function | STRING* | Merges two arrays or two objects. | MariaDB 10.0.23, MariaDB 10.1.9
jbin_object | Function | STRING* | Makes a JSON object containing its arguments. | MariaDB 10.0.23, MariaDB 10.1.9
jbin_object_nonull | Function | STRING* | Makes a JSON object containing its not null arguments. | MariaDB 10.0.23, MariaDB 10.1.9
jbin_object_add | Function | STRING* | Adds its second argument to its first object argument. | MariaDB 10.0.23, MariaDB 10.1.9
jbin_object_delete | Function | STRING* | Deletes the nth element of its first object argument. | MariaDB 10.0.23, MariaDB 10.1.9
jbin_object_key | Function | STRING* | Makes a JSON object from key/value pairs. | MariaDB 10.0.23
jbin_object_list | Function | STRING* | Returns the list of object keys as an array. | MariaDB 10.0.23, MariaDB 10.1.9
jbin_set_item | Function | STRING | Sets item values located at given paths. | MariaDB 10.0.23
jbin_update_item | Function | STRING | Updates item values located at given paths. | MariaDB 10.0.23
jfile_make | Function | STRING | Makes a json file from its json item first argument. | MariaDB 10.0.23, MariaDB 10.1.9
json_array | Function | STRING | Makes a JSON array containing its arguments. | MariaDB 10.0.17
json_array_add | Function | STRING | Adds its second argument to its first array argument (before MariaDB 10.0.23 / MariaDB 10.1.9, all following arguments). | MariaDB 10.0.17
json_array_add_values | Function | STRING | Adds all following arguments to its first array argument. | MariaDB 10.0.23, MariaDB 10.1.9
json_array_delete | Function | STRING | Deletes the nth element of its first array argument. | MariaDB 10.0.18
json_array_grp | Aggregate | STRING | Makes JSON arrays from the incoming argument. | MariaDB 10.0.17
json_file | Function | STRING | Returns the content of a (json) file. | MariaDB 10.0.23, MariaDB 10.1.9
json_get_item | Function | STRING | Accesses and returns a json item by a JPATH key. | MariaDB 10.0.23, MariaDB 10.1.9
json_insert_item | Function | STRING | Inserts item values located at given paths. | MariaDB 10.0.23
json_item_merge | Function | STRING | Merges two arrays or two objects. | MariaDB 10.0.23, MariaDB 10.1.9
json_locate_all | Function | STRING | Returns the JPATHs of all occurrences of an element. | MariaDB 10.0.23, MariaDB 10.1.9
json_object | Function | STRING | Makes a JSON object containing its arguments. | MariaDB 10.0.17
json_object_delete | Function | STRING | Deletes the nth element of its first object argument. | MariaDB 10.0.23, MariaDB 10.1.9
json_object_grp | Aggregate | STRING | Makes JSON objects from the incoming arguments. | MariaDB 10.0.17
json_object_list | Function | STRING | Returns the list of object keys as an array. | MariaDB 10.0.23, MariaDB 10.1.9
json_object_nonull | Function | STRING | Makes a JSON object containing its not null arguments. | MariaDB 10.0.18
json_serialize | Function | STRING | Serializes the return of a “Jbin” function. | MariaDB 10.0.23, MariaDB 10.1.9
json_set_item | Function | STRING | Sets item values located at given paths. | MariaDB 10.0.23
json_update_item | Function | STRING | Updates item values located at given paths. | MariaDB 10.0.23
json_value | Function | STRING | Makes a JSON value from its single argument. | MariaDB 10.0.17
jsoncontains | Function | INTEGER | Returns 0 or 1 depending on whether an element is contained in the document. | MariaDB 10.0.23
jsoncontains_path | Function | INTEGER | Returns 0 or 1 depending on whether a JPATH is contained in the document. | MariaDB 10.0.23
jsonget_string | Function | STRING | Accesses and returns a string element by a JPATH key. | MariaDB 10.0.23, MariaDB 10.1.9
jsonget_int | Function | INTEGER | Accesses and returns an integer element by a JPATH key. | MariaDB 10.0.23, MariaDB 10.1.9
jsonget_real | Function | REAL | Accesses and returns a real element by a JPATH key. | MariaDB 10.0.23, MariaDB 10.1.9
jsonlocate | Function | STRING | Returns the JPATH to access one element. | MariaDB 10.0.23, MariaDB 10.1.9

String values are mapped to JSON strings. These strings are automatically escaped to conform to the JSON syntax. The automatic escaping is bypassed when the value has an alias beginning with ‘json_’. This is automatically the case when a JSON UDF argument is another JSON UDF whose name begins with “json_” (not case sensitive). This is why all functions that do not return a Json item are not prefixed by “json_”.

Numeric values are (big) integers, double floating point values or decimal values. Decimal values are character strings containing a numeric representation and are treated as strings. Floating point values contain a decimal point and/or an exponent. Integers are written without decimal points.

To install these functions, execute the following commands:

Note: Json function names are often written on this page with leading upper case letters for clarity. This is possible in SQL queries because function names are case-insensitive. However, when creating or dropping them, their names must match the case they have in the library module (lower case from MariaDB 10.0.23 and MariaDB 10.1.9).

On Unix, before MariaDB 10.0.23, MariaDB 10.1.9:

create function Json_Array returns string soname 'ha_connect.so';
create function Json_Array_Add returns string soname 'ha_connect.so';
create function Json_Array_Delete returns string soname 'ha_connect.so';
create function Json_Object returns string soname 'ha_connect.so';
create function Json_Object_Nonull returns string soname 'ha_connect.so';
create function Json_Value returns string soname 'ha_connect.so';
create aggregate function Json_Array_Grp returns string soname 'ha_connect.so';
create aggregate function Json_Object_Grp returns string soname 'ha_connect.so';

On Unix, from MariaDB 10.0.23:

create function jsonvalue returns string soname 'ha_connect.so';
create function json_array returns string soname 'ha_connect.so';
create function json_array_add_values returns string soname 'ha_connect.so';
create function json_array_add returns string soname 'ha_connect.so';
create function json_array_delete returns string soname 'ha_connect.so';
create function json_object returns string soname 'ha_connect.so';
create function json_object_nonull returns string soname 'ha_connect.so';
create function json_object_key returns string soname 'ha_connect.so';
create function json_object_add returns string soname 'ha_connect.so';
create function json_object_delete returns string soname 'ha_connect.so';
create function json_object_list returns string soname 'ha_connect.so';
create aggregate function json_array_grp returns string soname 'ha_connect.so';
create aggregate function json_object_grp returns string soname 'ha_connect.so';
create function jsonlocate returns string soname 'ha_connect.so';
create function json_locate_all returns string soname 'ha_connect.so';
create function jsoncontains returns integer soname 'ha_connect.so';
create function jsoncontains_path returns integer soname 'ha_connect.so';
create function json_item_merge returns string soname 'ha_connect.so';
create function json_get_item returns string soname 'ha_connect.so';
create function jsonget_string returns string soname 'ha_connect.so';
create function jsonget_int returns integer soname 'ha_connect.so';
create function jsonget_real returns real soname 'ha_connect.so';
create function json_set_item returns string soname 'ha_connect.so';
create function json_insert_item returns string soname 'ha_connect.so';
create function json_update_item returns string soname 'ha_connect.so';
create function json_file returns string soname 'ha_connect.so';
create function jfile_make returns string soname 'ha_connect.so';
create function json_serialize returns string soname 'ha_connect.so';
create function jbin_array returns string soname 'ha_connect.so';
create function jbin_array_add_values returns string soname 'ha_connect.so';
create function jbin_array_add returns string soname 'ha_connect.so';
create function jbin_array_delete returns string soname 'ha_connect.so';
create function jbin_object returns string soname 'ha_connect.so';
create function jbin_object_nonull returns string soname 'ha_connect.so';
create function jbin_object_key returns string soname 'ha_connect.so';
create function jbin_object_add returns string soname 'ha_connect.so';
create function jbin_object_delete returns string soname 'ha_connect.so';
create function jbin_object_list returns string soname 'ha_connect.so';
create function jbin_item_merge returns string soname 'ha_connect.so';
create function jbin_get_item returns string soname 'ha_connect.so';
create function jbin_set_item returns string soname 'ha_connect.so';
create function jbin_insert_item returns string soname 'ha_connect.so';
create function jbin_update_item returns string soname 'ha_connect.so';
create function jbin_file returns string soname 'ha_connect.so';

On Windows, from MariaDB 10.0.23:

create function jsonvalue returns string soname 'ha_connect';
create function json_array returns string soname 'ha_connect';
create function json_array_add_values returns string soname 'ha_connect';
create function json_array_add returns string soname 'ha_connect';
create function json_array_delete returns string soname 'ha_connect';
create function json_object returns string soname 'ha_connect';
create function json_object_nonull returns string soname 'ha_connect';
create function json_object_key returns string soname 'ha_connect';
create function json_object_add returns string soname 'ha_connect';
create function json_object_delete returns string soname 'ha_connect';
create function json_object_list returns string soname 'ha_connect';
create aggregate function json_array_grp returns string soname 'ha_connect';
create aggregate function json_object_grp returns string soname 'ha_connect';
create function jsonlocate returns string soname 'ha_connect';
create function json_locate_all returns string soname 'ha_connect';
create function jsoncontains returns integer soname 'ha_connect';
create function jsoncontains_path returns integer soname 'ha_connect';
create function json_item_merge returns string soname 'ha_connect';
create function json_get_item returns string soname 'ha_connect';
create function jsonget_string returns string soname 'ha_connect';
create function jsonget_int returns integer soname 'ha_connect';
create function jsonget_real returns real soname 'ha_connect';
create function json_set_item returns string soname 'ha_connect';
create function json_insert_item returns string soname 'ha_connect';
create function json_update_item returns string soname 'ha_connect';
create function json_file returns string soname 'ha_connect';
create function jfile_make returns string soname 'ha_connect';
create function json_serialize returns string soname 'ha_connect';
create function jbin_array returns string soname 'ha_connect';
create function jbin_array_add_values returns string soname 'ha_connect';
create function jbin_array_add returns string soname 'ha_connect';
create function jbin_array_delete returns string soname 'ha_connect';
create function jbin_object returns string soname 'ha_connect';
create function jbin_object_nonull returns string soname 'ha_connect';
create function jbin_object_key returns string soname 'ha_connect';
create function jbin_object_add returns string soname 'ha_connect';
create function jbin_object_delete returns string soname 'ha_connect';
create function jbin_object_list returns string soname 'ha_connect';
create function jbin_item_merge returns string soname 'ha_connect';
create function jbin_get_item returns string soname 'ha_connect';
create function jbin_set_item returns string soname 'ha_connect';
create function jbin_insert_item returns string soname 'ha_connect';
create function jbin_update_item returns string soname 'ha_connect';
create function jbin_file returns string soname 'ha_connect';

Json_Value

Json_Value (val)

Returns a JSON value as a string, for instance:

select Json_Value(3.1416);
Json_Value(3.1416)
3.141600

Json_Array

Json_Array(val1, …, valn)

This function returns a string denoting a JSON array with all its arguments as members. For example:

select Json_Array(56, 3.1416, 'My name is "Foo"', NULL);
Json_Array(56, 3.1416, 'My name is "Foo"', NULL)
[56,3.141600,"My name is \"Foo\"",null]

Note: The argument list can be empty. If so, an empty array is returned.

Json_Object

Json_Object(arg1, …, argn)

Return a string denoting a JSON object. For instance:

select Json_Object(56, 3.1416, 'machin', NULL);

The object is filled with pairs corresponding to the given arguments. The key of each pair is made by default from the argument value.

Json_Object(56, 3.1416, 'machin', NULL)
{"56":56,"3.1416":3.141600,"machin":"machin","NULL":null}

When needed, it is possible to specify the keys by giving an alias to the arguments:

select Json_Object(56 qty, 3.1416 price, 'machin' truc, NULL garanty);
Json_Object(56 qty,3.1416 price,'machin' truc, NULL garanty)
{"qty":56,"price":3.141600,"truc":"machin","garanty":null}

If the alias is prefixed by ‘json_’ (to prevent escaping) the key name is stripped from that prefix.
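For instance (a sketch assuming the UDFs are installed), an alias such as json_arr both prevents escaping and yields the key “arr”:

```sql
-- The json_ prefix suppresses escaping and is stripped from the key name.
select Json_Object('[1,2]' json_arr) Obj;
```

This should return {"arr":[1,2]}.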

When entering values retrieved from a table, the key is by default the column name:

select Json_Object(matricule, nom, titre, salaire) from connect.employe where nom = 'PANTIER';
Json_Object(matricule, nom, titre, salaire)
{"matricule":40567,"nom":"PANTIER","titre":"DIRECTEUR","salaire":14000.000000}

Json_Object_Nonull

Json_Object_Nonull(arg1, …, argn)

This function works like Json_Object but “null” arguments are ignored and not inserted in the object. Arguments are regarded as “null” if they are JSON null values, void arrays or objects, or arrays or objects containing only null members.

It is mainly used to avoid constructing useless null items when converting tables (see later).
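A minimal sketch (assuming the UDFs are installed) showing the difference from Json_Object:

```sql
-- The null-valued pair "garanty" is simply not inserted in the object.
select Json_Object_Nonull(56 qty, 3.1416 price, NULL garanty) Obj;
```

Where Json_Object would include a "garanty":null pair, this should return only the qty and price pairs.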

Json_Array_Add

Json_Array_Add(arg1, arg2, [arg3][, arg4][, ...])

Note: In CONNECT version 1.3 (before MariaDB 10.0.23, MariaDB 10.1.9), this function behaved like the new Json_Array_Add_Values function. The following describes this function for CONNECT version 1.4 (from MariaDB 10.0.23, MariaDB 10.1.9) only.

The first argument must be a JSON array. The second argument is added as a member of this array. For example:

select Json_Array_Add(Json_Array(56,3.1416,'machin',NULL),
'One more') Array;
Array
[56,3.141600,"machin",null,"One more"]

Note: The first array is not escaped, its (alias) name beginning with ‘json_’.

Now we can see how adding an author to the JSAMPLE2 table can alternatively be done:

update jsample2 set 
  json_author = json_array_add(json_author, json_object('Charles' FIRSTNAME, 'Dickens' LASTNAME)) 
  where isbn = '9782840825685';

Note: Calling a column returning JSON a name prefixed by json_ (like json_author here) is good practice and removes the need to give it an alias to prevent escaping when used as an argument.

Additional arguments: If a third integer argument is given, it specifies the position (zero based) of the added value:

select Json_Array_Add('[5,3,8,7,9]' json_, 4, 2) Array;
Array
[5,3,4,8,7,9]

If a string argument is added, it specifies the Json path to the array to be modified. For instance:

select Json_Array_Add('{"a":1,"b":2,"c":[3,4]}' json_, 5, 1, 'c');
Json_Array_Add('{"a":1,"b":2,"c":[3, 4]}' json_, 5, 1, 'c')
{"a":1,"b":2,"c":[3,5,4]}

Json_Array_Delete

MariaDB starting with 10.0.18

Json_Array_Delete was introduced in MariaDB 10.0.18.

Json_Array_Delete(arg1, arg2 [,arg3] [...])

The first argument should be a JSON array. The second argument is an integer indicating the rank (0 based conforming to general json usage) of the element to delete. For example:

select Json_Array_Delete(Json_Array(56,3.1416,'foo',NULL),1) Array;
Array
[56,"foo",null]

Now we can see how to delete the second author from the JSAMPLE2 table:

update jsample2 set json_author = json_array_delete(json_author, 1) 
  where isbn = '9782840825685';

A Json path can be specified as a third string argument.

Json_Array_Grp

Json_Array_Grp(arg)

This is an aggregate function that makes an array filled from values coming from the rows retrieved by a query. Let us suppose we have the pet table:

+---------+--------+--------+
| name    | race   | number |
+---------+--------+--------+
| John    | dog    |      2 |
| Bill    | cat    |      1 |
| Mary    | dog    |      1 |
| Mary    | cat    |      1 |
| Lisbeth | rabbit |      2 |
| Kevin   | cat    |      2 |
| Kevin   | bird   |      6 |
| Donald  | dog    |      1 |
| Donald  | fish   |      3 |
+---------+--------+--------+

The query:

select name, json_array_grp(race) from pet group by name;

will return:

+---------+----------------------+
| name    | json_array_grp(race) |
+---------+----------------------+
| Bill    | ["cat"]              |
| Donald  | ["dog","fish"]       |
| John    | ["dog"]              |
| Kevin   | ["cat","bird"]       |
| Lisbeth | ["rabbit"]           |
| Mary    | ["dog","cat"]        |
+---------+----------------------+

One problem with the JSON aggregate functions is that they construct their result in memory and cannot know in advance the amount of storage needed, since the number of rows per group is unknown when aggregation starts.

This is why the number of values for each group is limited. This limit is the value of the connect_json_grp_size session variable and is equal to 10 by default. Nevertheless, working on a larger table is possible, but only after setting connect_json_grp_size to the ceiling of the number of rows per group for the table. Try not to set it to a very large value to avoid memory exhaustion.
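For example, to aggregate a table whose largest group may reach 100 rows, the limit can be raised first (a sketch using the session variable named above):

```sql
-- Raise the per-group value limit, then run the aggregate query.
set connect_json_grp_size = 100;
select name, json_array_grp(race) from pet group by name;
```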

Json_Object_Grp

Json_Object_Grp(arg1,arg2)

This function works like Json_Array_Grp. It makes a JSON object filled with values passed from its first argument. Values passed from the second argument will be the keys associated with the values.

This can be seen with the query:

select name, json_object_grp(number,race) from pet group by name;

This query returns:

+---------+------------------------------+
| name    | json_object_grp(number,race) |
+---------+------------------------------+
| Bill    | {"cat":1}                    |
| Donald  | {"dog":1,"fish":3}           |
| John    | {"dog":2}                    |
| Kevin   | {"cat":2,"bird":6}           |
| Lisbeth | {"rabbit":2}                 |
| Mary    | {"dog":1,"cat":1}            |
+---------+------------------------------+

Json_Array_Add_Values

MariaDB starting with 10.0.23

Json_Array_Add_Values, added in CONNECT 1.4, replaces the Json_Array_Add function of CONNECT version 1.3 (before MariaDB 10.0.23, MariaDB 10.1.9).

Json_Array_Add_Values(arg, arglist)

The first argument must be a JSON array string. Then all other arguments are added as members of this array. For example:

select Json_Array_Add_Values
  (Json_Array(56, 3.1416, 'machin', NULL), 'One more', 'Two more') Array;
Array
[56,3.141600,"machin",null,"One more","Two more"]

Json_Object_Add

MariaDB starting with 10.0.23

Json_Object_Add was added in CONNECT 1.4 (from MariaDB 10.0.23, MariaDB 10.1.9).

Json_Object_Add(arg1, arg2, [arg3] …)

The first argument must be a JSON object. The second argument is added as a pair to this object. For example:

select Json_Object_Add
  ('{"item":"T-shirt","qty":27,"price":24.99}' json_old,'blue' color) newobj;
newobj
{"item":"T-shirt","qty":27,"price":24.990000,"color":"blue"}

Note: If the specified key already exists in the object, its value is replaced by the new one.

The third string argument is a Json path to the target object.
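A sketch (assuming the UDFs are installed) of the key-replacement behavior:

```sql
-- "qty" already exists in the object, so its value is replaced.
select Json_Object_Add('{"item":"T-shirt","qty":27}' json_old, 32 qty) newobj;
```

This should return {"item":"T-shirt","qty":32} rather than an object with two "qty" pairs.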

Json_Object_Delete

MariaDB starting with 10.0.23

Json_Object_Delete was added in CONNECT 1.4 (from MariaDB 10.0.23, MariaDB 10.1.9).

Json_Object_Delete(arg1, arg2, [arg3] …):

The first argument must be a JSON object. The second argument is the key of the pair to delete. For example:

select Json_Object_Delete('{"item":"T-shirt","qty":27,"price":24.99}' json_old, 'qty') newobj;
newobj
{"item":"T-shirt","price":24.99}

The third string argument is a Json path to the object to be the target of deletion.

Json_Object_List

MariaDB starting with 10.0.23

Json_Object_List was added in CONNECT 1.4 (from MariaDB 10.0.23, MariaDB 10.1.9).

Json_Object_List(arg1, …):

The first argument must be a JSON object. This function returns an array containing the list of all keys existing in the object. For example:

select Json_Object_List(Json_Object(56 qty,3.1416 price,'machin' truc, NULL garanty))
  "Key List";
Key List
["qty","price","truc","garanty"]

JsonLocate

MariaDB starting with 10.0.23

JsonLocate was added in CONNECT 1.4 (from MariaDB 10.0.23, MariaDB 10.1.9).

JsonLocate(arg1, arg2, [arg3], …):

The first argument must be a JSON tree. The second argument is the item to be located; it can be a constant or a json item. Constant values must be equal in both type and value to be found: an integer, for instance, will not match a double or a string representing the same number.

This function returns the json path to the located item or null if it is not found. For example:

select JsonLocate('{"AUTHORS":[{"FN":"Jules", "LN":"Verne"}, 
  {"FN":"Jack", "LN":"London"}]}' json_, 'Jack') Path;

This query returns:

Path
AUTHORS:[1]:FN

The path syntax is the same used in JSON CONNECT tables.

By default, the path of the first occurrence of the item is returned. The third parameter can be used to specify the occurrence whose path is to be returned. For instance:

select 
JsonLocate('[45,28,[36,45],89]',45) first,
JsonLocate('[45,28,[36,45],89]',45,2) second,
JsonLocate('[45,28,[36,45],89]',45.0) `wrong type`,
JsonLocate('[45,28,[36,45],89]','[36,45]' json_) json;
+-------+---------+------------+------+
| first | second  | wrong type | json |
+-------+---------+------------+------+
| [0]   | [2]:[1] | <null>     | [2]  |
+-------+---------+------------+------+

For string items, the comparison is case sensitive by default. However, it is possible to specify a string to be compared case insensitively by giving it an alias beginning by “ci”:

select JsonLocate('{"AUTHORS":[{"FN":"Jules", "LN":"Verne"}, 
  {"FN":"Jack", "LN":"London"}]}' json_, 'VERNE' ci) Path;
Path
AUTHORS:[0]:LN

Json_Locate_All

MariaDB starting with 10.0.23

Json_Locate_All was added in CONNECT 1.4 (from MariaDB 10.0.23, MariaDB 10.1.9).

Json_Locate_All(arg1, arg2, [arg3], …):

The first argument must be a JSON item. The second argument is the item to be located. This function returns the paths to all locations of the item as an array of strings. For example:

select Json_Locate_All('[[45,28],[[36,45],89]]',45);

This query returns:

All paths
["[0]:[0]","[1]:[0]:[1]"]

The returned array can be passed to other functions. For instance, to get the number of occurrences of an item in a json tree, you can do:

select JsonGet_Int(Json_Locate_All('[[45,28],[[36,45],89]]',45), '[#]') "Nb of occurs";

The displayed result:

Nb of occurs
2

If specified, the third integer argument sets the maximum depth to search in the document, that is, the maximum number of items in the returned paths (the number of ‘:’ separator characters in them plus one). It defaults to 10 but can be increased for complex documents, or reduced to limit the depth of the returned paths.
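As a sketch of the depth argument (assuming the UDFs are installed), limiting the search depth to 2 should exclude the occurrence located at depth 3 in the earlier example:

```sql
-- Only occurrences whose paths contain at most two items are reported.
select Json_Locate_All('[[45,28],[[36,45],89]]', 45, 2) Paths;
```

Under this assumption the result should be ["[0]:[0]"].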

Json_Item_Merge

MariaDB starting with 10.0.23

Json_Item_Merge was added in CONNECT 1.4 (from MariaDB 10.0.23, MariaDB 10.1.9).

Json_Item_Merge(arg1, arg2, …)

This function merges two arrays or two objects. For arrays, this is done by adding to the first array all the values of the second array. For instance:

select Json_Item_Merge(Json_Array('a','b','c'), Json_Array('d','e','f')) as "Result"; 

The function returns:

Result
["a","b","c","d","e","f"]

For objects, the pairs of the second object are added to the first object if the key does not yet exist in it; otherwise the pair of the first object is set with the value of the matching pair of the second object. For instance:

select Json_Item_Merge(Json_Object(1 "a", 2 "b", 3 "c"), Json_Object(4 "d",5 "b",6 "f")) 
  as "Result"; 

The function returns:

Result
{"a":1,"b":5,"c":3,"d":4,"f":6}

Json_Get_Item

MariaDB starting with 10.1.9

Json_Get_Item was added in CONNECT 1.4 (from MariaDB 10.1.9).

Json_Get_Item(arg1, arg2, …)

This function returns a subset of the json document passed as first argument. The second argument is the json path of the item to be returned and should be one returning a json item (terminated by a ‘*’). If not, the function will try to make it right but this is not foolproof. For instance:

select Json_Get_Item(Json_Object('foo' as "first", Json_Array('a', 33) 
  as "json_second"), 'second') as "item";

The correct path should have been ‘second:*’ but in this simple case the function was able to make it right. The returned item:

item
["a",33]

Note: The array is aliased “json_second” to indicate it is a json item and avoid escaping it. However, the “json_” prefix is skipped when making the object and must not be added to the path.

JsonGet_String / JsonGet_Int / JsonGet_Real

MariaDB starting with 10.1.9

JsonGet_String, JsonGet_Int and JsonGet_Real were added in CONNECT 1.4 (from MariaDB 10.1.9).

JsonGet_String(arg1, arg2, [arg3] …)
JsonGet_Int(arg1, arg2, [arg3] …)
JsonGet_Real(arg1, arg2, [arg3] …)

The first argument should be a JSON item. If it is a string with no alias, it will be converted to a json item. The second argument is the path of the item to be located in the first argument and returned, converted if necessary according to the function used. For example:

select
JsonGet_String('{"qty":7,"price":29.50,"garanty":null}','price') "String",
JsonGet_Int('{"qty":7,"price":29.50,"garanty":null}','price') "Int",
JsonGet_Real('{"qty":7,"price":29.50,"garanty":null}','price') "Real";

This query returns:

+--------+-----+--------------------+
| String | Int | Real               |
+--------+-----+--------------------+
| 29.50  |  29 | 29.500000000000000 |
+--------+-----+--------------------+

The function JsonGet_Real can be given a third argument to specify the number of decimal digits of the returned value. For instance:

select 
JsonGet_Real('{"qty":7,"price":29.50,"garanty":null}','price',4) "Real";

This query returns:

Real
29.5000

The given path can specify all operators for arrays except the “expand” [X] operator. For instance:

select 
JsonGet_Int(Json_Array(45,28,36,45,89), '[4]') "Rank",
JsonGet_Int(Json_Array(45,28,36,45,89), '[#]') "Number",
JsonGet_String(Json_Array(45,28,36,45,89), '[","]') "Concat",
JsonGet_Int(Json_Array(45,28,36,45,89), '[+]') "Sum",
JsonGet_Real(Json_Array(45,28,36,45,89), '[!]', 2) "Avg";

The result:

+------+--------+----------------+-----+-------+
| Rank | Number | Concat         | Sum | Avg   |
+------+--------+----------------+-----+-------+
|   89 |      5 | 45,28,36,45,89 | 243 | 48.60 |
+------+--------+----------------+-----+-------+

Json_File

MariaDB starting with 10.1.9

Json_File was added in CONNECT 1.4 (from MariaDB 10.1.9).

Json_File(arg1, [arg2, [arg3]], …)

The first argument must be a file name. This function returns the text of the file that is supposed to be a json file. If only one argument is specified, the file text is returned without being parsed. Up to two additional arguments can be specified:

  • A string argument is the path to the sub-item to be returned.
  • An integer argument specifies the pretty format value of the file.

This function is chiefly used to get the json item argument of other json functions from a json file. For instance, supposing the file tb.json is:

{ "_id" : 5, "type" : "food", "ratings" : [ 5, 8, 9 ] }
{ "_id" : 6, "type" : "car", "ratings" : [ 5, 9 ] }

Extracting a value from it can be done with a query such as:

select JsonGet_String(Json_File('tb.json', 0), '[1]:type') "Type";

This query returns:

Type
car

However, we’ll see that, most of the time, it is better to use Jbin_File or to directly specify the file name in queries. In particular this function should not be used for queries that must modify the json item because, even if the modified json is returned, the file itself would be unchanged.

Jfile_Make

MariaDB starting with 10.1.9

Jfile_Make was added in CONNECT 1.4 (from MariaDB 10.1.9).

Jfile_Make(arg1, arg2, [arg3], …)

The first argument must be a json item (if it is just a string, Jfile_Make will try its best to see if it is a json item or an input file name). The following arguments are a string file name and an integer pretty value (defaulting to 2) in any order. This function creates a json file containing the first argument item.

The returned string value is the created file name. If not specified as an argument, the file name can in some cases be retrieved from the first argument; in such cases the file itself is modified.

This function can be used to create or format a json file. For instance, supposing we want to format the file tb.json, this can be done with the query:

select Jfile_Make('tb.json' jfile_, 2);

The tb.json file will be changed to:

[
  {
    "_id": 5,
    "type": "food",
    "ratings": [
      5,
      8,
      9
    ]
  },
  {
    "_id": 6,
    "type": "car",
    "ratings": [
      5,
      9
    ]
  }
]

The “JBIN” return type

Almost all functions returning a json string - whose name begins with Json_ - have a counterpart with a name beginning with Jbin_. This is both for performance (speed and memory) as well as for better control of what the functions should do.

This is due to the way CONNECT UDFs work internally. The Json functions, when receiving json strings as parameters, parse them and construct a binary tree in memory. They work on this tree and, before returning, serialize it to produce the new json string they return.

If the json document is large, this can take up a large amount of time and storage space. It is all right when one simple json function is called – it must be done anyway – but is a waste of time and memory when json functions are used as parameters to other json functions.

To avoid multiple serializing and parsing, the Jbin functions should be used as parameters to other functions. Indeed, they do not serialize the memory document tree, but return a structure allowing the receiving function to have direct access to the memory tree. This saves the serialize-parse steps otherwise needed to pass the argument and removes the need to reallocate the memory of the binary tree, which by the way is 6 to 7 times the size of the json string. For instance:

select Json_Object(Jbin_Array_Add(Jbin_Array('a','b','c'), 'd') as "Jbin_foo") as "Result";

This query returns:

Result
{"foo":["a","b","c","d"]}

Here the binary json tree allocated by Jbin_Array is completed by Jbin_Array_Add and Json_Object and serialized only once to make the final result string. It would be serialized and parsed two more times if using “Json” functions.

Note that Jbin results are recognized as such because they are aliased beginning with “Jbin_”. This is why in the Json_Object function the alias is specified as “Jbin_foo”.

What happens if it is not recognized as such? These functions are declared as returning a string and to take care of this, the returned structure begins with a zero terminated string. For instance:

select Jbin_Array('a','b','c');

This query replies:

Jbin_Array('a','b','c')
Binary Json array

Note: When testing, the tree returned by a “Jbin” function can be seen using the Json_Serialize function whose unique parameter must be a “Jbin” result. For instance:

select Json_Serialize(Jbin_Array('a','b','c'));

This query returns:

Json_Serialize(Jbin_Array('a','b','c'))
["a","b","c"]

Note: For this simple example, this is equivalent to using the Json_Array function.

Using a file as json UDF first argument

We have seen that many json UDFs can have an additional argument not yet described. This is in the case where the json item argument was referring to a file. Then the additional integer argument is the pretty value of the json file. It matters only when the first argument is just a file name (to make the UDF understand this argument is a file name, it should be aliased with a name beginning with jfile_) or if the function modifies the file, in which case it will be rewritten with this pretty format.

The json item is created by extracting the required part from the file. This can be the whole file but more often only some of it. There are two ways to specify the sub-item of the file to be used:

  1. Specifying it in the Json_File or Jbin_File arguments.
  2. Specifying it in the receiving function (not possible for all functions).

It doesn’t make any difference when the Jbin_File is used but it does with Json_File. For instance:

select Jfile_Make('{"a":1, "b":[44, 55]}' json_, 'test.json');
select Json_Array_Add(Json_File('test.json', 'b'), 66);

The second query returns:

Json_Array_Add(Json_File('test.json', 'b'), 66)
[44,55,66]

It just returns the modified subset returned by the Json_File function, while the query:

select Json_Array_Add(Json_File('test.json'), 66, 'b');

returns what was received from Json_File with the modification made on the subset.

Json_Array_Add(Json_File('test.json'), 66, 'b')
{"a":1,"b":[44,55,66]}

Note that in both cases the test.json file is not modified. This is because the Json_File function returns a string representing all or part of the file text, but no information about the file name. This is fine for checking what the effect of a modification to the file would be.

However, to have the file modified, use the Jbin_File function or directly give the file name. Jbin_File returns a structure containing the file name, a pointer to the parsed file tree and, optionally, a pointer to the subset when a path is given as a second argument:

select Json_Array_Add(Jbin_File('test.json', 'b'), 66);

This query returns:

Json_Array_Add(Jbin_File('test.json', 'b'), 66)
test.json

This time the file is modified. This can be checked with:

select Json_File('test.json', 3);
Json_File('test.json', 3)
{"a":1,"b":[44,55,66]}

The reason such a query returns the first argument (the file name) is to support tables such as:

create table tb (
n int key,
jfile_cols char(10) not null);
insert into tb values(1,'test.json');

In this table, the jfile_cols column just contains a file name. If we update it by:

update tb set jfile_cols = Json_Array_Add(Jbin_File('test.json', 'b'), 66)
  where n = 1;

it is the test.json file that must be modified, not the jfile_cols column. This can be checked by:

select JsonGet_String(jfile_cols, '[1]:*') from tb;

JsonGet_String(jfile_cols, '[1]:*')
{"a":1,"b":[44,55,66]}

Note: Naming the second column of the table with a name beginning with “jfile_” is an important facility: the json functions then know it contains a file name without an alias having to be specified in the queries.

Using “Jbin” to control what the query execution does

This applies in particular when acting on json files. We have seen that a file is not modified when the Json_File function is used as an argument to a modifying function, because the modifying function just receives a copy of the json file text. This is not true with the Jbin_File function, which does not serialize the binary document but makes it directly accessible. Also, as we have seen earlier, json functions that modify their first file parameter modify the file and return the file name. This is done by directly serializing the internal binary document as a file.

However, the “Jbin” counterpart of these functions does not serialize the binary document and thus does not modify the json file. For example let us compare these two queries:

/* First query */

select Json_Object(Jbin_Object_Add(Jbin_File('bt2.json'), 4 as "d") as "Jbin_bt1")
  as "Result";

/* Second query */

select Json_Object(Json_Object_Add(Jbin_File('bt2.json'), 4 as "d") as "Jfile_bt1")
  as "Result";

Both queries return:

Result
{"bt1":{"a":1,"b":2,"c":3,"d":4}}

In the first query Jbin_Object_Add does not serialize the document (no “Jbin” functions do) and Json_Object just returns a serialized modified tree. Consequently, the file bt2.json is not modified. This query is all right to copy a modified version of the json file without modifying it.

However, in the second query Json_Object_Add does modify the json file and returns the file name. The Json_Object function receives this file name, reads and parses the file, makes an object from it and returns the serialized result. This modification can be done willingly but can be an unwanted side effect of the query.

Therefore, using “Jbin” functions as arguments, in addition to being faster and using less memory, is also safer when dealing with json files that should not be modified.

Using JSON as Dynamic Columns

The JSON NoSQL language has all the features needed to be used as an alternative to dynamic columns. For instance, take the following example using dynamic columns:

create table assets (
   item_name varchar(32) primary key, /* A common attribute for all items */
   dynamic_cols  blob  /* Dynamic columns will be stored here */
 );

INSERT INTO assets VALUES
   ('MariaDB T-shirt', COLUMN_CREATE('color', 'blue', 'size', 'XL'));

INSERT INTO assets VALUES
   ('Thinkpad Laptop', COLUMN_CREATE('color', 'black', 'price', 500));

SELECT item_name, COLUMN_GET(dynamic_cols, 'color' as char) AS color FROM assets;
+-----------------+-------+
| item_name       | color |
+-----------------+-------+
| MariaDB T-shirt | blue  |
| Thinkpad Laptop | black |
+-----------------+-------+

/* Remove a column: */

UPDATE assets SET dynamic_cols=COLUMN_DELETE(dynamic_cols, "price")
  WHERE COLUMN_GET(dynamic_cols, 'color' as char)='black';

/* Add a column: */

UPDATE assets SET dynamic_cols=COLUMN_ADD(dynamic_cols, 'warranty', '3 years')
   WHERE item_name='Thinkpad Laptop';

/* You can also list all columns, or (starting from MariaDB 10.0.1) get them together with their values in JSON format: */

SELECT item_name, column_list(dynamic_cols) FROM assets;
+-----------------+---------------------------+
| item_name       | column_list(dynamic_cols) |
+-----------------+---------------------------+
| MariaDB T-shirt | `size`,`color`            |
| Thinkpad Laptop | `color`,`warranty`        |
+-----------------+---------------------------+

SELECT item_name, COLUMN_JSON(dynamic_cols) FROM assets;
+-----------------+----------------------------------------+
| item_name       | COLUMN_JSON(dynamic_cols)              |
+-----------------+----------------------------------------+
| MariaDB T-shirt | {"size":"XL","color":"blue"}           |
| Thinkpad Laptop | {"color":"black","warranty":"3 years"} |
+-----------------+----------------------------------------+

The same result can be obtained with JSON columns using the JSON UDFs:

/* JSON equivalent */

create table jassets (
   item_name varchar(32) primary key, /* A common attribute for all items */
   json_cols varchar(512)  /* JSON columns will be stored here */
 );

INSERT INTO jassets VALUES
   ('MariaDB T-shirt', Json_Object('blue' color, 'XL' size));

INSERT INTO jassets VALUES
   ('Thinkpad Laptop', Json_Object('black' color, 500 price));

SELECT item_name, JsonGet_String(json_cols, 'color') AS color FROM jassets;
+-----------------+-------+
| item_name       | color |
+-----------------+-------+
| MariaDB T-shirt | blue  |
| Thinkpad Laptop | black |
+-----------------+-------+

/* Remove a column: */

UPDATE jassets SET json_cols=Json_Object_Delete(json_cols, 'price')
 WHERE JsonGet_String(json_cols, 'color')='black';

/* Add a column */

UPDATE jassets SET json_cols=Json_Object_Add(json_cols, '3 years' warranty)
 WHERE item_name='Thinkpad Laptop';

/* You can also list all columns, or get them together with their values in JSON format: */

SELECT item_name, Json_Object_List(json_cols) FROM jassets;
+-----------------+-----------------------------+
| item_name       | Json_Object_List(json_cols) |
+-----------------+-----------------------------+
| MariaDB T-shirt | ["color","size"]            |
| Thinkpad Laptop | ["color","warranty"]        |
+-----------------+-----------------------------+

SELECT item_name, json_cols FROM jassets;
+-----------------+----------------------------------------+
| item_name       | json_cols                              |
+-----------------+----------------------------------------+
| MariaDB T-shirt | {"color":"blue","size":"XL"}           |
| Thinkpad Laptop | {"color":"black","warranty":"3 years"} |
+-----------------+----------------------------------------+

However, using JSON brings features not existing in dynamic columns:

  • Use of a language shared by many implementations and developers.
  • Full support of arrays, currently missing from dynamic columns.
  • Access to subparts of a JSON document by a Jpath that can include calculations on arrays.
  • The ability to reference external JSON files.
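As an illustration of the array support, a hypothetical 'sizes' attribute can be stored directly (note the json_ alias prefix, used in the conversion examples below as well, which tells Json_Object that its argument is already a JSON string rather than plain text):

```sql
/* Store an array value, something dynamic columns cannot do */
INSERT INTO jassets VALUES
   ('MariaDB Hoodie', Json_Object('grey' color, Json_Array('M','L','XL') json_sizes));

SELECT item_name, json_cols FROM jassets WHERE item_name='MariaDB Hoodie';
```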

With more experience, additional UDFs can be easily written to support new needs.

Converting Tables to JSON

The JSON UDFs and the direct Jpath “*” facility are powerful tools to convert tables and files to the JSON format. For instance, the file biblio3.json we used previously can be obtained by converting the xsample.xml file. This can be done like this:

create table xj1 (row varchar(500) field_format='*') 
 engine=connect table_type=JSON file_name='biblio3.json' option_list='jmode=2';

And then:

insert into xj1
  select json_object_nonull(ISBN, language LANG, SUBJECT, 
    json_array_grp(json_object(authorfn FIRSTNAME, authorln LASTNAME)) json_AUTHOR, TITLE,
    json_object(translated PREFIX, json_object(tranfn FIRSTNAME, tranln LASTNAME) json_TRANSLATOR) 
    json_TRANSLATED, json_object(publisher NAME, location PLACE) json_PUBLISHER, date DATEPUB) 
from xsampall2 group by isbn;

The xj1 table rows directly receive the JSON objects made by the select statement used in the insert statement, and the table file is made as shown above (xj1 uses pretty=2 by default). Its mode is Jmode=2 because the inserted values are strings, even though they denote JSON objects.

Another way to do this is to create a table describing the file format we want before the biblio3.json file existed:

create table jsampall3 (
ISBN char(15),
LANGUAGE char(2) field_format='LANG',
SUBJECT char(32),
AUTHORFN char(128) field_format='AUTHOR:[X]:FIRSTNAME',
AUTHORLN char(128) field_format='AUTHOR:[X]:LASTNAME',
TITLE char(32),
TRANSLATED char(32) field_format='TRANSLATED:PREFIX',
TRANSLATORFN char(128) field_format='TRANSLATED:TRANSLATOR:FIRSTNAME',
TRANSLATORLN char(128) field_format='TRANSLATED:TRANSLATOR:LASTNAME',
PUBLISHER char(20) field_format='PUBLISHER:NAME',
LOCATION char(20) field_format='PUBLISHER:PLACE',
DATE int(4) field_format='DATEPUB')
engine=CONNECT table_type=JSON file_name='biblio3.json';

and to populate it by:

insert into jsampall3 select * from xsampall;

This is a simpler method. However, it cannot handle multiple values per row, which is why we inserted from xsampall, not from xsampall2. How can we add the missing additional authors to this table? Here again we must create a utility table able to handle JSON strings.

create table xj2 (ISBN char(15), author varchar(150) field_format='AUTHOR:*') 
  engine=connect table_type=JSON file_name='biblio3.json' option_list='jmode=1';

update xj2 set author =
(select json_array_grp(json_object(authorfn FIRSTNAME, authorln LASTNAME)) 
  from xsampall2 where isbn = xj2.isbn);

Voilà !

Converting JSON Files

We have seen that JSON files can be formatted differently depending on the pretty option. In particular, big data files should be formatted with pretty equal to 0 when used by a CONNECT JSON table. The best and simplest way to convert a file from one format to another is to use the Jfile_Make function. Indeed, this function makes a file of the specified format using the syntax:

Jfile_Make(json_document, [file_name], [pretty]);

The file name is optional when the JSON document comes from a Jbin_File function, because the returned structure makes it available. For instance, to convert the JSON file tb.json back to pretty=0, simply execute:

select Jfile_Make(Jbin_File('tb.json'), 0);
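Conversely, to write a pretty=2 copy under another name while leaving the original untouched, the target file name can be given explicitly (tb2.json is a hypothetical output file):

```sql
/* Reads tb.json, writes a pretty=2 version of it to tb2.json */
select Jfile_Make(Jbin_File('tb.json'), 'tb2.json', 2);
```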

Performance Consideration

MySQL and PostgreSQL have a JSON data type that is not just text but an internal encoding of JSON data, which saves parsing time when executing JSON functions. Of course, the parse must still be done when creating the data, and serialization must be done to output the result.

CONNECT works directly on character strings representing JSON values, with the drawback of parsing them each time but the advantage of working easily on external data. Generally this is not too penalizing, because JSON data are often of small or reasonable size. The only case where it can be a serious problem is when working on a big JSON file.

In that case, the file should be formatted or converted to pretty=0. Also, it should not be used directly by the JSON UDFs, because they parse the whole file even when only a subset is used. Instead, it should be accessed through a JSON table created on it. Indeed, JSON tables do not parse the whole document but only the item corresponding to the row they are working on. In addition, indexing can be used by the table, as explained previously on this page.
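As a sketch, a table created on a big pretty=0 file could look like this (big.json and its columns are hypothetical; pretty is passed through option_list, and lrecl must be set large enough to hold the longest record line):

```sql
create table jbig (
  id int field_format='ID',
  name char(64) field_format='NAME')
engine=CONNECT table_type=JSON file_name='big.json'
option_list='pretty=0' lrecl=256;
```

Queries on jbig then parse only the one-object-per-line records they actually read, and an index on a column such as id can avoid scanning the file at all.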

Generally speaking, the maximum flexibility offered by CONNECT comes from using JSON tables and JSON UDFs together. Some things are better handled by tables, others by UDFs. The tools are there, but it is up to you to discover the best way to resolve your problems.

Specifying a JSON table Encoding

An important feature of JSON is that strings should be in Unicode. As a matter of fact, all the examples we have found on the Internet seem to be plain ASCII. This is because Unicode is generally encoded in JSON files using UTF-8, UTF-16 or UTF-32.

To specify the required encoding, just use the data_charset CONNECT option.
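For instance, assuming the file is UTF-8 encoded, a sketch of such a table (jutf8 is a hypothetical table name) would be:

```sql
create table jutf8 (
  ISBN char(15),
  TITLE char(32))
engine=CONNECT table_type=JSON file_name='biblio3.json'
data_charset='utf8';
```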

Notes

  1. The value n can be 0-based or 1-based depending on the BASE table option. The default is 0, matching current usage in the JSON world, but it can be set to 1 for tables created with old versions.
  2. See for instance: https://github.com/mysqludf/lib_mysqludf_json#readme and https://blogs.oracle.com/svetasmirnova/entry/json_udf_functions_version_04
  3. This will not work when CONNECT is compiled embedded.
