S3 storage engine internals
The S3 storage engine is based on the Aria code. Internally, the S3 engine inherits from the Aria code, with hooks that change reads so that data is read from S3 instead of from the local disk.
The S3 engine uses its own page cache, modified to be able to handle reading blocks from S3 (of size s3_block_size). Internally, the S3 page cache uses pages of aria-block-size for splitting the blocks read from S3.
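As a quick check, both block sizes can be inspected from SQL. A minimal sketch; the values your server reports depend on its configuration:

SHOW GLOBAL VARIABLES LIKE 's3_block_size';   -- size of the blocks read from S3
SHOW GLOBAL VARIABLES LIKE 'aria_block_size'; -- page size used inside the S3 page cache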
ALTER TABLE
ALTER TABLE will first create a local table in the normal Aria on-disk format and then move both index and data to S3 in blocks of S3_BLOCK_SIZE. The .frm file is also copied to S3 to support discovery by other MariaDB servers.
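For example, an existing table can be converted with the usual ALTER TABLE syntax (the table name below is hypothetical):

-- Rebuilds the table locally in Aria format, then copies index,
-- data and .frm file to S3
ALTER TABLE old_data ENGINE=S3;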
Big reads
One of the properties of many S3 implementations is that they favor big reads. It's said that 4M gives the best performance, which is why the default value for S3_BLOCK_SIZE is 4M.
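The block size can also be set per table when moving it to S3. A minimal sketch, assuming the S3_BLOCK_SIZE table option and a hypothetical table name; the value is given in bytes:

-- 4194304 bytes = 4M, the default
ALTER TABLE old_data ENGINE=S3 S3_BLOCK_SIZE=4194304;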
Compression
If compression (COMPRESSION_ALGORITHM=zlib) is used, then all index blocks and data blocks are compressed. The .frm file and the Aria definition header (the first page/pages in the index file) are not compressed, as these are used by discovery/open.
If compression is used, the logical block size is still S3_BLOCK_SIZE, but the block stored in S3 will be the size of the compressed block.
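Compression is enabled with the COMPRESSION_ALGORITHM table option mentioned above (the table name is again hypothetical):

-- Index and data blocks are stored zlib-compressed on S3;
-- the .frm copy and the Aria header remain uncompressed
ALTER TABLE old_data ENGINE=S3 COMPRESSION_ALGORITHM=zlib;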
Structure stored on S3
The table will be copied to S3 into the following locations:

frm file (for discovery): s3_bucket/database/table/frm
First index block (contains the description of the Aria file): s3_bucket/database/table/aria
Rest of the index file: s3_bucket/database/table/index/block_number
Data file: s3_bucket/database/table/data/block_number
block_number is a 6-digit decimal number, zero-padded (it can grow beyond 6 digits; the padding is just for nice output).
Using the awsctl python tool to examine data
Installing awsctl on Linux
# install python-pip
zypper install python-pip
# install aws client
pip install --upgrade pip
# The following installs awscli tools in ~/.local/bin
pip install --upgrade --user awscli
export PATH=~/.local/bin:$PATH
# configure your aws credentials
aws configure
Using the awsctl tool
One can use the aws python tool to see how things are stored on S3:
shell> aws s3 ls --recursive s3://mariadb-bucket/
2019-05-10 17:46:48       8192 foo/test1/aria
2019-05-10 17:46:49    3227648 foo/test1/data/000001
2019-05-10 17:46:48        942 foo/test1/frm
2019-05-10 17:46:48    1015808 foo/test1/index/000001
To delete an obsolete table foo.test1, one can do:
shell> ~/.local/bin/aws s3 rm --recursive s3://mariadb-bucket/foo/test1
delete: s3://mariadb-bucket/foo/test1/aria
delete: s3://mariadb-bucket/foo/test1/data/000001
delete: s3://mariadb-bucket/foo/test1/frm
delete: s3://mariadb-bucket/foo/test1/index/000001