
S3 Storage Engine

Integrate MariaDB Server with Amazon S3 using the S3 Storage Engine. Learn how to store and retrieve data directly from cloud object storage for scalability and cost-efficiency.

S3 Status Variables

A list of status variables for monitoring the S3 engine's page cache and its interactions with the cloud service.

The S3 storage engine is available from MariaDB 10.5.

This page documents status variables related to the S3 storage engine. See Server Status Variables for a complete list of status variables that can be viewed with SHOW STATUS.

See also the Full list of MariaDB options, system and status variables.

S3_pagecache_blocks_not_flushed

  • Description: The number of dirty blocks in the S3 page cache.

  • Scope: Global

  • Data Type: numeric

  • Introduced: MariaDB 10.5.4

S3_pagecache_blocks_unused

  • Description: Free blocks in the S3 page cache.

  • Scope: Global

  • Data Type: numeric

  • Introduced: MariaDB 10.5.4

S3_pagecache_blocks_used

  • Description: Blocks used in the S3 page cache.

  • Scope: Global

  • Data Type: numeric

  • Introduced: MariaDB 10.5.4

S3_pagecache_reads

  • Description: The number of blocks read from S3.

  • Scope: Global

  • Data Type: numeric

  • Introduced: MariaDB 10.5.4

This page is licensed: CC BY-SA / Gnu FDL

Using the S3 Storage Engine

This guide covers typical use cases for the S3 engine, such as archiving inactive tables, and details supported operations like ALTER TABLE and SELECT.

The S3 storage engine is read-only and allows one to archive MariaDB tables in Amazon S3, or in any third-party public or private cloud that implements the S3 API (of which there are many), while still having them accessible for reading in MariaDB.

Installing the Plugin

The S3 storage engine is gamma maturity, so the following step can be omitted.

On earlier releases, when it was alpha maturity, it did not load by default on a stable release of the server due to the default value of the plugin_maturity variable. Set plugin-maturity to alpha (or below) in your config file to permit installation of the plugin:

[mariadbd]
plugin-maturity = alpha

and restart the server.

Now install the plugin library, for example:

INSTALL SONAME 'ha_s3';

If the library is not available, you will get an error such as:

ERROR 1126 (HY000): Can't open shared library '/var/lib/mysql/lib64/mysql/plugin/ha_s3.so' 
  (errno: 13, cannot open shared object file: No such file or directory)

you may need to install a separate package for the S3 storage engine, for example:

shell> yum install MariaDB-s3-engine

or for Debian/Ubuntu:

shell> apt install mariadb-plugin-s3

Creating an S3 Table

As S3 tables are read-only, one cannot create an S3 table with CREATE TABLE. Instead, use ALTER TABLE old_table ENGINE=S3 to convert an existing table to be stored on S3.

Moving Data to S3

To move data from an existing table to S3, you can run:

ALTER TABLE old_table ENGINE=S3 COMPRESSION_ALGORITHM=zlib

To get data back to a 'normal' table, you can do:

ALTER TABLE s3_table ENGINE=INNODB

New Options for ALTER TABLE

  • S3_BLOCK_SIZE : Set to 4M as default. This is the block size for all index and data pages stored in S3.

  • COMPRESSION_ALGORITHM: Set to 'none' as default. Which compression algorithm to use for blocks stored in S3. Options are: none or zlib. See the sketch below.
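
For example, a minimal sketch combining both options when converting a table to S3 (the table name old_table is illustrative; S3_BLOCK_SIZE is given in bytes):

-- 4M blocks, zlib compression for all data and index blocks
ALTER TABLE old_table ENGINE=S3 S3_BLOCK_SIZE=4194304 COMPRESSION_ALGORITHM=zlib;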

ALTER TABLE can be used on S3 tables as normal to add columns or change column definitions. S3 tables can also be part of partitions. See Discovery below.

mariadbd Startup Options for S3

To be able to use S3 for storage, one must define how to access S3 and where data is stored in S3:

  • s3_access_key: The AWS access key to access your data.

  • s3_secret_key: The AWS secret key to access your data.

  • s3_bucket: The AWS bucket where your data should be stored. All MariaDB table data is stored in this bucket.

  • s3_region: The AWS region where your data should be stored.

For compatibility tweaks with different providers:

  • s3_provider: Enable S3 provider specific compatibility tweaks. "Default", "Amazon", or "Huawei". From MariaDB 10.6.20.

If you are using an S3 service that connects over HTTP (like minio), you also need to set the following variables:

  • s3_port: Port number to connect to (0 means use the default).

  • s3_use_http: If true, force use of the HTTP protocol.

If you are going to use a primary-replica setup, you should look at the following variables:

  • s3_replicate_alter_as_create_select: When converting an S3 table to a local table, log all rows in the binary log. Defaults to TRUE. This allows the replica to replicate CREATE TABLE .. SELECT FROM s3_table even if the replica doesn't have access to the original s3_table.

  • s3_slave_ignore_updates: Should be set if the primary and replica share the same S3 instance. This tells the replica that it can ignore any updates to the S3 tables, as they are already applied on the primary. Defaults to FALSE.

The above defaults assume that the primary and replica don't share the same S3 instance.

Other, less critical options are:

  • s3_host_name: Hostname for the S3 service; "s3.amazonaws.com", the Amazon S3 service, by default.

  • s3_protocol_version: Protocol used to communicate with S3. One of "Auto", "Amazon" or "Original", where "Auto" is the default. If you get errors like "8 Access Denied" when connecting to another service provider, try changing this option. The reason for this variable is that Amazon has changed some parts of the S3 protocol since it was originally introduced, but other service providers still use the original protocol.

  • s3_block_size: Set to 4M as default. This is the default block size for a table, if not specified in CREATE TABLE.

Last, some options you probably won't ever have to touch:

  • s3_pagecache_age_threshold: Default 300. This characterizes the number of hits a hot block has to be untouched until it is considered aged enough to be downgraded to a warm block. This specifies the percentage ratio of that number of hits to the total number of blocks in the page cache.

  • s3_pagecache_buffer_size: Default 128M. The size of the buffer used for data and index blocks for S3 tables. Increase this to get better index handling (for all reads and multiple writes) to as much as you can afford.

  • s3_pagecache_division_limit: Default 100. The minimum percentage of warm blocks in the key cache.

  • s3_pagecache_file_hash_size: Default 512. Number of hash buckets for open files. If you have a lot of S3 files open you should increase this for faster flush of changes. A good value is probably 1/10 of the number of possible open S3 files.

  • s3_ssl_no_verify: If true, SSL certificate verification for the S3 endpoint is disabled. From MariaDB 10.6.20.

  • s3_no_content_type: If true (false is default), disables the Content-Type header, required for some providers. From MariaDB 10.6.20.

  • s3_debug: Default 0. Generates a trace file from libmarias3 on stderr (mysqld.err) for debugging the S3 protocol.

Typical my.cnf Entry for connecting to Amazon S3 service

[mariadb]
s3=ON
s3-bucket=mariadb
s3-access-key=xxxx
s3-secret-key=xxx
s3-region=eu-north-1
s3-host-name=s3.amazonaws.com
# The following is useful if you want to use minio as a S3 server. (https://min.io/)
#s3-port=9000
#s3-use-http=ON

# Primary and replica share same S3 tables.
s3-slave-ignore-updates=1

[aria_s3_copy]
s3-bucket=mariadb
s3-access-key=xxxx
s3-secret-key=xxx
s3-region=eu-north-1
s3-host-name=s3.amazonaws.com
# The following is useful if you want to use minio as a S3 server. (https://min.io/)
#s3-port=9000
#s3-use-http=ON

Typical my.cnf entry for connecting to a local S3 server (e.g. minio)

[mariadb]
s3=ON
s3-host-name="127.0.0.1"
s3-bucket=storage-engine
s3-access-key=minio
s3-secret-key=minioadmin
s3-port=9000
s3-use-http=ON

[aria_s3_copy]
s3=ON
s3-host-name="127.0.0.1"
s3-bucket=storage-engine
s3-access-key=minio
s3-secret-key=minioadmin
s3-port=9000
s3-use-http=ON

Typical Usage Case for S3 Tables

The typical use case is that there exist tables that after some time become fairly inactive, but are still important enough that they can not be removed. In that case, an option is to move such a table to an archiving service, accessible through an S3 API.

Notice that S3 means the Cloud Object Storage API defined by Amazon AWS. Often the whole of Amazon's Cloud Object Storage is referred to as S3. In the context of the S3 archive storage engine, it refers to the API itself that defines how to store objects in a cloud service, be it Amazon's or someone else's. OpenStack, for example, provides an S3 API for storing objects.

The main benefit of storing things in S3-compatible storage is that the cost of storage is much cheaper than many other alternatives. Many S3 implementations also provide reliable long-term storage.

Operations Allowed on S3 Tables

  • S3 supports all types, keys and other options that are supported by the Aria engine. One can also perform ALTER TABLE on an S3 table to add or modify columns etc.

  • Any SELECT operations you can perform on a normal table should work with an S3 table.

  • SHOW TABLES will show all tables that exist in the currently defined S3 location.

Discovery

The S3 storage engine supports full MariaDB table discovery. This means that if you have the S3 storage engine enabled and properly configured, a table stored in S3 will automatically be discovered when it's accessed with SELECT, SHOW TABLES, or any other operation that tries to access it. In the case of SELECT, the .frm file from S3 will be copied to local storage to speed up future accesses.

When an S3 table is opened for the first time (it's not in the table cache) and there is a local .frm file, the S3 engine will check if it's still relevant, and if not, update or delete the .frm file.

This means that if the table definition changes on S3 and it's in the local cache, one has to execute FLUSH TABLES to get MariaDB to notice the change and update the .frm file.
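
For example, to make a server pick up a changed table definition from S3 (the table name is hypothetical):

-- Closes the cached table; the next access re-discovers the .frm from S3
FLUSH TABLES archive_db.old_orders;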

If partitioned S3 tables are used, the partition definitions will also be stored on S3 storage and are discovered by other servers.

Discovery of S3 tables is not done for tables in the mysql database, to make mariadbd boot faster and more securely.

Replication

S3 works with replication. One can use replication in two different scenarios:

  • The primary and replica share the same S3 storage. In this case the primary will make all changes to the S3 data, and the replica will ignore any changes in the replication stream to S3 data. This scenario is achieved by setting s3_slave_ignore_updates to 1.

  • The primary and replica don't share the same S3 storage, or the replica uses another storage engine for the S3 tables. This scenario is achieved by setting s3_slave_ignore_updates to 0.

aria_s3_copy

aria_s3_copy is an external tool that one can use to copy tables to and from S3. Use aria_s3_copy --help to see the options for how to use it.

mariadb-dump

  • mariadb-dump will by default ignore S3 tables. If mariadb-dump is run with the --copy-s3-tables option, the resulting file will contain a CREATE statement for a similar Aria table, followed by the table data, ending with an ALTER TABLE xxx ENGINE=S3.

ANALYZE TABLE

As of MariaDB 10.5.14, ANALYZE TABLE is supported for S3 tables. As S3 tables are read-only, a normal ANALYZE TABLE will not do anything. However, using ANALYZE TABLE table_name PERSISTENT FOR ... will now work.
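
A minimal sketch, assuming a hypothetical S3 table s3_archive:

-- Collects persistent statistics for all columns and indexes
ANALYZE TABLE s3_archive PERSISTENT FOR ALL;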

CHECK TABLE

As of MariaDB 10.5.14, CHECK TABLE will work. As S3 tables are read-only, it is very unlikely that they become corrupted. The only known ways an S3 table could be corrupted are if the original table copied to S3 was corrupted, or if the process of copying the original table to S3 was somehow interrupted.
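
For example (the table name is hypothetical):

-- Verifies that the table stored on S3 is intact
CHECK TABLE s3_archive;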

Current Limitations

  • mysql-test-run doesn't by default test the S3 engine, as we can't embed AWS keys into mysql-test-run.

  • Replicas should not access S3 tables while they are being ALTERed! This is because there is no locking implemented for S3 between servers. However, after a table (either the original S3 table or the partitioned S3 table) is changed on the primary, the replica will notice this on the next access and update its local definition.

Limitations in ALTER PARTITION

All ALTER PARTITION operations are supported on S3 partitioned tables, except the following (a sketch of a permitted operation follows the list):

  • REBUILD PARTITION

  • TRUNCATE PARTITION

  • REORGANIZE PARTITION
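
As a sketch, an ALTER PARTITION operation outside this exception list, such as DROP PARTITION, is expected to work (table and partition names are hypothetical):

ALTER TABLE s3_sales DROP PARTITION p2020;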

Performance Considerations

Depending on your connection speed to your S3 provider, there can be some notable slowdowns in some operations.

Discovery

As S3 supports discovery (automatically making tables that are in S3 available), this can cause some small performance problems if the S3 engine is enabled. Partitioned S3 tables also support discovery.

  • CREATE TABLE is a bit slower, as the S3 engine has to check if the to-be-created table already exists in S3.

  • Queries on information_schema tables are slower, as S3 has to check if there are new tables in S3.

  • DROP of non-existing tables is slower, as S3 has to check if the table is in S3.

There is no performance degradation when accessing existing tables on the server. Accessing an S3 table for the first time copies the .frm file from S3 to the local disk, speeding up future accesses to the table.

Caching

  • Accessing a table on S3 can take some time, especially if you are using big packets (s3_block_size). However, the second access to the same data should be fast, as it is then cached in the S3 page cache.

Things to Try to Increase Performance

If you have performance problems with the S3 engine, here are some things you can try:

  • Decreasing s3_block_size. This can be done both globally and per table.

  • Use COMPRESSION_ALGORITHM=zlib when creating the table. This will decrease the amount of data transferred from S3 to the local cache.

  • Increasing the size of the S3 page cache (s3_pagecache_buffer_size); see the sketch below.
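
For example, a minimal my.cnf sketch doubling the S3 page cache from its 128M default (the 256M value is illustrative; size it to your working set):

[mariadb]
s3-pagecache-buffer-size=256M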

Also try executing the query twice, to check whether the problem is that the data was not yet cached. When data is cached locally, performance should be excellent.

Future Development Ideas

  • Store AWS keys and region in the mysql.servers table (as Spider and FederatedX do). This would allow one to have different tables on different S3 servers.

  • Store the s3 bucket, access_key and secret_key in a cache to be better able to reuse connections. This would save some memory and make some S3 accesses a bit faster, as old connections could be reused.

Troubleshooting S3 on SELinux

If you get errors such as:

ERROR 3 (HY000): Got error from put_object(bubu/produkt/frm): 5 Couldn't connect to server

one reason could be that your system doesn't allow MariaDB to connect to ports other than 3306. The procedure to enable other ports is the following:

Search for the ports allowed for MariaDB:

$ sudo semanage port -l | grep mysqld_port_t
mysqld_port_t                tcp   1186, 3306, 63132-63164

Say you want to allow MariaDB to connect to port 32768:

$ sudo semanage port -a -t mysqld_port_t -p tcp 32768

You can verify that the new port, 32768, is now allowed for MariaDB:

$ sudo semanage port -l | grep mysqld_port_t
mysqld_port_t                tcp   32768, 1186, 3306, 63132-63164

See Also

  • S3 Storage Engine Internals

This page is licensed: CC BY-SA / Gnu FDL

Testing Connections to S3

Instructions on how to verify your S3 configuration using tools like `aria_s3_copy` and the `mysql-test-run` suite to ensure proper connectivity.

The S3 storage engine is available from MariaDB 10.5.

If you can't get the S3 storage engine to work, here are some steps to help verify where the problem could be.

S3 Connection Variables


    In most cases the problem is to correctly set the S3 connection variables.

    Key S3 variables are:

    • s3_access_key: The AWS access key to access your data

    • s3_secret_key: The AWS secret key to access your data

    • s3_bucket: The AWS bucket where your data should be stored. All MariaDB table data is stored in this bucket.

    • s3_region: The AWS region where your data should be stored.

    • s3_host_name: Hostname for the S3 service.

    • s3_provider: Enable S3 provider specific compatibility tweaks. "Default", "Amazon", or "Huawei".

    • s3_protocol_version: Protocol used to communicate with S3. One of "Amazon" or "Original".

    There are several ways to ensure you get them right, detailed in the following sections.

    Using aria_s3_copy to Test the Connection

    aria_s3_copy is a tool that allows you to copy Aria tables to and from S3. It's useful for testing the connection, as it allows you to specify all S3 options on the command line.

    Execute the following SQL commands to create a trivial table:

    USE test;
    CREATE TABLE s3_test (a INT) ENGINE=aria row_format=page transactional=0;
    INSERT INTO s3_test VALUES (1),(2);
    FLUSH TABLES s3_test;

    Now you can use the aria_s3_copy tool to copy this to S3 from your shell/the command line:

    shell> cd mariadb-data-directory/test
    shell> aria_s3_copy --op=to --verbose --force <other options> s3_test.frm

    Copying frm file s3_test.frm
    Copying aria table: test.s3_test to s3
    Creating aria table information test/s3_test/aria
    Copying index information test/s3_test/index
    Copying data information test/s3_test/data

    As you can see from the above, aria_s3_copy is using the current directory as the database name.

    You can also set the aria_s3_copy options in your my.cnf file to avoid some typing.

    Using mariadb-test-run to Test the Connection and the S3 Storage Engine

    One can use the MariaDB test system to run all default S3 test against your S3 storage.

    To do that you have to locate the mysql-test directory in your system and cd to it.

    The config file for the S3 test system can be found at suite/s3/my.cnf. To enable testing you have to edit this file and add the s3 connection options to the end of the file. It should look something like this after editing:

    !include include/default_mysqld.cnf
    !include include/default_client.cnf

    [mysqld.1]
    s3=ON
    #s3-host-name=s3.amazonaws.com
    #s3-protocol-version=Amazon
    s3-bucket=MariaDB
    s3-access-key=
    s3-secret-key=
    s3-region=

    You must give values for s3-access-key, s3-secret-key and s3-region that reflect your S3 provider. The s3-bucket name is defined by your administrator.

    If you are not using Amazon Web Services as your S3 provider, you must also specify s3-host-name and possibly change s3-protocol-version to "Original".

    Now you can test the configuration:

    shell> cd <mysql-test directory>
    shell> ./mysql-test-run --suite=s3
    ...
    s3.no_s3                                 [ pass ]      5
    s3.alter                                 [ pass ]  11073
    s3.arguments                             [ pass ]   2667
    s3.basic                                 [ pass ]   2757
    s3.discovery                             [ pass ]   7851
    s3.select                                [ pass ]   1325
    s3.unsupported                           [ pass ]    363

    Note that there may be more tests in your output as we are constantly adding more tests to S3 when needed.

    Create a trace of the S3 connection to see what goes wrong

    One can use the s3_debug variable to get a trace of the S3 engine's interaction with the S3 storage. The trace is sent to the error log.

    Here follows an example one can use to get a trace if ALTER TABLE .. ENGINE=S3 fails:

    USE test;
    CREATE TABLE s3_test (a INT) ENGINE=aria row_format=page transactional=0;
    INSERT INTO s3_test VALUES (1),(2);
    SET @@global.s3_debug=1;
    ALTER TABLE s3_test ENGINE=S3;
    SET @@global.s3_debug=0;

    If you have problems deciphering the trace, you can always create a ticket on MariaDB Jira and explain the problem you have, including any errors. Don't forget to include the trace!

    What to do when you have got things to work

    When you have got the connection to work, you should add the options to your global my.cnf file. Then you can start testing S3 from your mariadb command-line client by converting some existing table to S3 with ALTER TABLE ... ENGINE=S3.

    See Also

    • Using the S3 Storage Engine

    • Using MinIO with mysql-test-run to test the S3 storage engine

    This page is licensed: CC BY-SA / Gnu FDL

    S3 Engine Internals

    Learn about the internal architecture of the S3 engine, which inherits from Aria code but redirects reads to S3, using a dedicated page cache.

    The S3 storage engine is available from MariaDB 10.5.

    The S3 storage engine is based on the Aria code. Internally, the S3 storage engine inherits from the Aria code, with hooks that change reads so that, instead of reading data from the local disk, it reads it from S3.

    The S3 engine uses its own page cache, modified to be able to handle reading blocks from S3 (of size s3_block_size). Internally, the S3 page cache uses pages of aria_block_size for splitting the blocks read from S3; with the default 4M S3 block and 8K Aria pages, each S3 block is split into 512 page cache pages.

    ALTER TABLE

    ALTER TABLE table_name ENGINE=S3 will first create a local table in the normal Aria on-disk format and then move both index and data to S3, in blocks of S3_BLOCK_SIZE. The .frm file is also copied to S3, to support discovery by other MariaDB servers. You can also use ALTER TABLE to change the structure of an S3 table.

    Partitioning Tables

    S3 tables can also be used with partitioned tables. All ALTER PARTITION operations are supported, except:

    • REBUILD PARTITION

    • TRUNCATE PARTITION

    • REORGANIZE PARTITION

    Big Reads

    One of the properties of many S3 implementations is that they favor large reads. 4M is said to give the best performance, which is why the default value for S3_BLOCK_SIZE is 4M.

    Compression

    If compression (COMPRESSION_ALGORITHM=zlib) is used, then all index blocks and data blocks are compressed. The .frm file and Aria definition header (first page/pages in the index file) are not compressed as these are used by discovery/open.

    If compression is used, then the local block size is S3_BLOCK_SIZE, but the blocks stored in S3 are the size of the compressed blocks.

    Typical compression we have seen is in the range of 80% saved space.

    Structure Stored on S3

    The table is copied to S3 into the following locations:

    frm file (for discovery):
    s3_bucket/database/table/frm

    First index block (contains description of the Aria file):
    s3_bucket/database/table/aria

    Rest of the index file:
    s3_bucket/database/table/index/block_number

    Data file:
    s3_bucket/database/table/data/block_number

    block_number is a 6-digit decimal number, prefixed with 0. (It can be longer than 6 digits; the prefix is just for nice output.)

    Using the awsctl Python Tool to Examine Data

    Installing awsctl on Linux

    # install python-pip (on an OpenSuse distribution)
    # use the appropriate command for your distribution
    zypper install python-pip
    pip install --upgrade pip

    # the following installs awscli tools in ~/.local/bin
    pip install --upgrade --user awscli
    export PATH=~/.local/bin:$PATH

    # configure your aws credentials
    aws configure

    Using the awsctl Tool

    One can use the aws Python tool to see how things are stored on S3:

    shell> aws s3 ls --recursive s3://mariadb-bucket/
    2019-05-10 17:46:48       8192 foo/test1/aria
    2019-05-10 17:46:49    3227648 foo/test1/data/000001
    2019-05-10 17:46:48        942 foo/test1/frm
    2019-05-10 17:46:48    1015808 foo/test1/index/000001

    To delete an obsolete table foo.test1 one can do:

    shell> ~/.local/bin/aws s3 rm --recursive s3://mariadb-bucket/foo/test1
    delete: s3://mariadb-bucket/foo/test1/aria
    delete: s3://mariadb-bucket/foo/test1/data/000001
    delete: s3://mariadb-bucket/foo/test1/frm
    delete: s3://mariadb-bucket/foo/test1/index/000001

    See Also

    • Using the S3 storage engine

    This page is licensed: CC BY-SA / Gnu FDL


    aria_s3_copy

    A reference for the `aria_s3_copy` tool, which is used to manually copy Aria tables to and from S3 storage for testing and data migration.

    The S3 storage engine is available from MariaDB 10.5.

    aria_s3_copy is a tool for copying an Aria table to and from S3.

    The Aria table must be non-transactional and have ROW_FORMAT=PAGE.

    For aria_s3_copy to work reliably, the table should not be changed by the MariaDB server during the copy, and one should have first performed FLUSH TABLES to ensure that the table is properly closed.

    Example of properly created Aria table:

    CREATE TABLE test1 (a INT) transactional=0 row_format=PAGE engine=aria;

    ALTER TABLE table_name ENGINE=S3 works for any kind of table. It internally converts the table to an Aria table and then moves it to S3 storage.

    Main Arguments

    -c, --compress: Use compression.

    -o, --op=name: Operation to execute. One of 'from_s3', 'to_s3' or 'delete_from_s3'.

    -d, --database=name: Database for copied table (second prefix). If not given, the directory of the table file is used.

    -B, --s3-block-size=#: Block size for data/index blocks in S3.

    -L, --s3-protocol-version=name: Protocol used to communicate with S3. One of "Amazon" or "Original".

    -f, --force: Force copy even if target exists.

    -v, --verbose: Write more information.

    -V, --version: Print version and exit.

    -#, --debug[=name]: Output debug log. Often this is 'd:t:o,filename'.

    --s3-debug: Output debug log from marias3 to stdout.

    -?, --help: Display this help and exit.

    -k, --s3-access-key=name: AWS access key ID.

    -r, --s3-region=name: AWS region.

    -K, --s3-secret-key=name: AWS secret access key ID.

    -b, --s3-bucket=name: AWS prefix for tables.

    -h, --s3-host-name=name: Host name to S3 provider.

    -p, --s3-port=#: Port number to connect to (0 means use default).

    -P, --s3-use-http: If true, force use of HTTP protocol.

    Typical Configuration in a my.cnf File

    [aria_s3_copy]
    s3-bucket=mariadb
    s3-access-key=xxxx
    s3-secret-key=xxx
    s3-region=eu-north-1
    #s3-host-name=s3.amazonaws.com
    #s3-protocol-version=Amazon
    verbose=1
    op=to

    Example Usage

    The following command will copy an existing Aria table named test1 to S3. If the --database option is not given, then the directory name where the table files exist is used as the database:

    shell> aria_s3_copy --force --op=to --database=foo --compress --verbose --s3_block_size=4M test1
    Delete of aria table: foo.test1
    Delete of index information foo/test1/index
    Delete of data information foo/test1/data
    Delete of base information and frm
    Copying frm file test1.frm
    Copying aria table: foo.test1 to s3
    Creating aria table information foo/test1/aria
    Copying index information foo/test1/index
    .
    Copying data information foo/test1/data
    .

    When using --verbose, aria_s3_copy will write a dot for each 1/79th part of the file copied.

    See Also

    • Using the S3 Storage Engine. That page has examples of my.cnf entries for using aria_s3_copy.

    This page is licensed: CC BY-SA / Gnu FDL


    S3 System Variables

    This page lists system variables to configure the S3 engine, including AWS credentials, bucket names, page cache sizes, and connection protocols.

    The S3 storage engine is available from MariaDB 10.5.

    This page documents system variables related to the S3 storage engine.

    See Server System Variables for a complete list of system variables and instructions on setting system variables.

    Also see the Full list of MariaDB options, system and status variables

    Variables

    s3_access_key

    • Description: The AWS access key to access your data. See mariadbd startup options for S3.

    • Command line: --s3-access-key=val

    • Scope: Global

    • Dynamic: No

    • Data Type: String

    • Default Value: (Empty)

    • Introduced: MariaDB 10.5.4

    s3_block_size

    • Description: The default block size for a table, if not specified in CREATE TABLE. Set to 4M as default. See mariadbd startup options for S3.

    • Command line: --s3-block-size=#

    • Scope: Global

    • Dynamic: Yes

    • Data Type: Numeric

    • Default Value: 4194304

    • Range: 4194304 to 16777216

    • Introduced: MariaDB 10.5.4

    s3_bucket

    • Description: The AWS bucket where your data should be stored. All MariaDB table data is stored in this bucket. See mariadbd startup options for S3.

    • Command line: --s3-bucket=val

    • Scope: Global

    • Dynamic: No

    • Data Type: String

    • Default Value: MariaDB

    • Introduced: MariaDB 10.5.4

    s3_debug

    • Description: Generates a trace file from libmarias3 on stderr (mysqld.err) for debugging the S3 protocol.

    • Command line: --s3-debug=[0|1]

    • Scope: Global

    • Dynamic: Yes (>= MariaDB 10.6.17, MariaDB 10.11.7, MariaDB 11.0.5, MariaDB 11.1.4, MariaDB 11.2.3, MariaDB 11.3.2, MariaDB 11.4.1); No (<= MariaDB 10.6.16, MariaDB 10.11.6, MariaDB 11.0.4, MariaDB 11.1.3, MariaDB 11.2.2, MariaDB 11.3.1)

    • Data Type: Boolean

    • Valid Values: 0 or 1

    • Default Value: 0

    • Introduced: MariaDB 10.5.4

    s3_host_name

    • Description: Hostname for the S3 service; "s3.amazonaws.com", the Amazon S3 service, by default. See mariadbd startup options for S3.

    • Command line: --s3-host-name=val

    • Scope: Global

    • Dynamic: No

    • Data Type: String

    • Default Value: s3.amazonaws.com

    • Introduced: MariaDB 10.5.4

    s3_no_content_type

    • Description: If true (false is default), disables the Content-Type header, required for some providers.

    • Command line: --s3-no-content-type=[0|1]

    • Scope: Global

    • Dynamic: No

    • Data Type: Boolean

    • Default Value: 0

    • Introduced: MariaDB 10.6.20, MariaDB 10.11.10, MariaDB 11.2.6, MariaDB 11.4.4, MariaDB 11.6.2

    s3_pagecache_age_threshold

    • Description: This characterizes the number of hits a hot block has to be untouched until it is considered aged enough to be downgraded to a warm block. This specifies the percentage ratio of that number of hits to the total number of blocks in the page cache.

    • Command line: --s3-pagecache-age-threshold=val

    • Scope: Global

    • Dynamic: Yes

    • Data Type: Numeric

    • Default Value: 300

    • Range: 100 to 18446744073709551615

    • Introduced: MariaDB 10.5.4

    s3_pagecache_buffer_size

    • Description: The size of the buffer used for index blocks for S3 tables. Increase this to get better index handling (for all reads and multiple writes) to as much as you can afford. Size can be adjusted in blocks of 8192.

    • Command line: --s3-pagecache-buffer-size=val

    • Scope: Global

    • Dynamic: No

    • Data Type: Numeric

    • Default Value: 134217728 (128M)

    • Range: 33554432 to 18446744073709551615

    • Introduced: MariaDB 10.5.4

    s3_pagecache_division_limit

    • Description: The minimum percentage of warm blocks in key cache.

    • Command line: --s3-pagecache-division-limit=val

    • Scope: Global

    • Dynamic: Yes

    • Data Type: Numeric

    • Default Value: 100

    • Range: 1 to 100

    • Introduced: MariaDB 10.5.4

    s3_pagecache_file_hash_size

    • Description: Number of hash buckets for open files. Default 512. If you have a lot of S3 files open you should increase this for faster flush of changes. A good value is probably 1/10 of number of possible open S3 files.

    • Command line: --s3-pagecache-file-hash-size=#

    • Scope: Global

    • Dynamic: No

    • Data Type: Numeric

    • Default Value: 512

    • Range: 32 to 16384

    • Introduced: MariaDB 10.5.4

    s3_port

    • Description: The TCP port number on the S3 host to connect to. A value of 0 means determine automatically. See mariadbd startup options for S3.

    • Command line: --s3-port=#

    • Scope: Global

    • Dynamic: No

    • Data Type: Numeric

    • Default Value: 0

    • Range: 0 to 65535

    • Introduced: MariaDB 10.5.7

    s3_protocol_version

    • Description: Protocol used to communicate with S3. "Auto" is the default. If you get errors like "8 Access Denied" when connecting to another service provider, try changing this option. The reason for this variable is that Amazon has changed some parts of the S3 protocol since it was originally introduced, but other service providers still use the original protocol. Can also be set numerically.

      • Auto (0): Determine the protocol automatically. This is the default.

      • Original (1): Same as "Auto". Deprecated from MariaDB 10.6.17, MariaDB 10.11.7, MariaDB 11.0.5, MariaDB 11.1.4, MariaDB 11.2.3, MariaDB 11.3.2, MariaDB 11.4.1.

      • Amazon (2): Same as "Auto". Deprecated from MariaDB 10.6.17, MariaDB 10.11.7, MariaDB 11.0.5, MariaDB 11.1.4, MariaDB 11.2.3, MariaDB 11.3.2, MariaDB 11.4.1.

      • Legacy (3): v1 protocol. From MariaDB 10.6.17, MariaDB 10.11.7, MariaDB 11.0.5, MariaDB 11.1.4, MariaDB 11.2.3, MariaDB 11.3.2, MariaDB 11.4.1.

      • Path (4): From MariaDB 10.6.17, MariaDB 10.11.7, MariaDB 11.0.5, MariaDB 11.1.4, MariaDB 11.2.3, MariaDB 11.3.2, MariaDB 11.4.1.

      • Domain (5): From MariaDB 10.6.17, MariaDB 10.11.7, MariaDB 11.0.5, MariaDB 11.1.4, MariaDB 11.2.3, MariaDB 11.3.2, MariaDB 11.4.1.

    • Command line: --s3-protocol-version=val

    • Scope: Global

    • Dynamic: Yes

    • Data Type: Enum

    • Valid Values: Auto (0), Original (1), Amazon (2), Legacy (3), Path (4), Domain (5) (>= MariaDB 10.6.17, MariaDB 10.11.7, MariaDB 11.0.5, MariaDB 11.1.4, MariaDB 11.2.3, MariaDB 11.3.2, MariaDB 11.4.1); Auto, Original, Amazon (<= MariaDB 10.6.16, MariaDB 10.11.6, MariaDB 11.0.4, MariaDB 11.1.3, MariaDB 11.2.2, MariaDB 11.3.1)

    • Default Value: Auto

    • Introduced: MariaDB 10.5.4

    s3_provider

    • Description: Enable S3 provider specific compatibility tweaks. "Default", "Amazon", or "Huawei".

      • Default: No provider-specific tweaks are applied.

      • Amazon: Effectively sets s3_protocol_version to 5 (Domain), overriding any other setting it may have.

      • Huawei: Effectively sets s3_no_content_type to 1, overriding any other setting it may have.

    • Command line: --s3-provider=val

    • Scope: Global

    • Dynamic: Yes

    • Data Type: enum

    • Valid Values: Default, Amazon, Huawei

    • Default Value: Default

    • Introduced: MariaDB 10.6.20, MariaDB 10.11.10, MariaDB 11.2.6, MariaDB 11.4.4, MariaDB 11.6.2

    s3_region

    • Description: The AWS region where your data should be stored. See mariadbd startup options for S3.

    • Command line: --s3-region=val

    • Scope: Global

    • Dynamic: No

    • Data Type: String

    • Default Value: (Empty)

    • Introduced: MariaDB 10.5.4

    s3_replicate_alter_as_create_select

    • Description: When converting an S3 table to a local table, log all rows in the binary log. This allows the slave to replicate CREATE TABLE .. SELECT FROM s3_table even if the slave doesn't have access to the original s3_table.

    • Command line: --s3-replicate-alter-as-create-select=[0|1]

    • Scope: Global

    • Dynamic: No

    • Data Type: Boolean

    • Default Value: 1

    • Introduced: MariaDB 10.5.4

    s3_secret_key

    • Description: The AWS secret key to access your data. See mariadbd startup options for S3.

    • Command line: --s3-secret-key=val

    • Scope: Global

    • Dynamic: No

    • Data Type: String

    • Default Value: (Empty)

    • Introduced: MariaDB 10.5.4

    s3_slave_ignore_updates

    • Description: Should be set if master and slave share the same S3 instance. This tells the slave that it can ignore any updates to the S3 tables as they are already applied on the master.

    • Command line: --s3-slave-ignore-updates=[0|1]

    • Scope: Global

    • Dynamic: No

    • Data Type: Boolean

    • Default Value: 0

    • Introduced: MariaDB 10.5.4

    s3_ssl_no_verify

    • Description: If true, SSL certificate verification for the S3 endpoint is disabled

    • Command line: --s3-ssl-no-verify=[0|1]

    • Scope: Global

    • Dynamic: No

    • Data Type: Boolean

    • Default Value: 0

    • Introduced: MariaDB 10.6.20, MariaDB 10.11.10, MariaDB 11.2.6, MariaDB 11.4.4, MariaDB 11.6.2

    s3_use_http

    • Description: If enabled, HTTP is used instead of HTTPS.

    • Command line: --s3-use-http=[0|1]

    • Scope: Global

    • Dynamic: No

    • Data Type: Boolean

    • Default Value: 0

    • Introduced: MariaDB 10.5.7

    See Also

    • Using the S3 Storage Engine

    This page is licensed: CC BY-SA / Gnu FDL
