Streaming MariaDB backups in the cloud
If you are a DBA or system administrator, you should already be familiar with Percona Xtrabackup, the free hot backup tool for MariaDB and MySQL, and you probably use it to take onsite backups of your production databases.
But what if the backup server is inaccessible because of an outage, or the data has been corrupted? Offsite backups should also be part of a complete disaster recovery strategy.
Amazon Web Services is one of the most popular cloud service providers, and they initially developed Amazon S3 (Simple Storage Service) to back up their own Oracle databases. S3 is now used by many companies to store objects such as static website assets, but it’s still a very popular solution for storing online backups: it’s cheap, reliable and easy to set up.
In this article we’ll see how to securely store our MariaDB + Xtrabackup backups in the cloud, using encryption and compression, all in one go.
Requirements list:
- Xtrabackup
- S3 client: gof3r
- S3 storage bucket
- AWS credentials
First of all, if it is not installed already, Xtrabackup is required; it should be available from your distribution repository or, if that is not the case, from third-party repositories or from the MariaDB binary downloads.
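For example, with the Percona repository configured, installation usually comes down to a single package (the package name is an assumption here and may differ between distributions and repository versions). On Debian/Ubuntu:
# apt-get install percona-xtrabackup
or on RHEL/CentOS:
# yum install percona-xtrabackup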
We also need to install an Amazon S3 client; in this case I recommend s3gof3r, an S3 client written in Go which natively supports streaming. Extract the provided gof3r binary for your architecture to /usr/local/bin.
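For example, assuming you downloaded a Linux amd64 release tarball from the s3gof3r project page (the archive name and layout below are illustrative, adjust them to the release you actually downloaded):
$ tar xzf gof3r_0.5.0_linux_amd64.tar.gz
# cp gof3r_0.5.0_linux_amd64/gof3r /usr/local/bin/
# chmod +x /usr/local/bin/gof3r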
Another thing we need is an S3 bucket. Log in to the Amazon AWS Console, select S3 as a service and create a new bucket. Write down the name of the bucket for later use. You also need AWS credentials, which you should add to your shell environment (also add those lines to .bashrc or .zshrc, so they remain in the environment when you log in again):
# Replace those lines with your actual AWS Credentials
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
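Before running a full backup, you can sanity-check the credentials and the bucket by streaming a small test object through gof3r (the test key name here is arbitrary):
$ echo "connectivity test" | gof3r put -b mariabackup10 -k test.txt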
As we want our backup storage and transfer to be secure, we will encrypt the backup using the AES256 algorithm. Xtrabackup has an encryption feature to which we will pass an encryption key, and we invoke openssl to generate it. We will be prompted for an encryption password (we won’t need the password for decrypting the backup, since we only use one-way encryption); write down the IV (initialization vector) part of the output.
$ openssl aes-256-cbc -P -md sha1
enter aes-256-cbc encryption password:
Verifying - enter aes-256-cbc encryption password:
salt=1A1B586EED989D9D
key=FA1373BA711570552A4F07B6BD378B63DA62CE8874253E5C4F29EF64724800BE
iv =24ED163062AE861363A586A3575B1C80
We are now ready to stream an encrypted, compressed backup to Amazon S3. We will use the following Innobackupex options:
As mentioned before, we will use AES256 encryption.
--encrypt=AES256
We pass the generated IV as the encryption key.
--encrypt-key="24ED163062AE861363A586A3575B1C80"
We use the xbstream streaming format; Xtrabackup also offers tar, but the tar format doesn’t support encryption and compression.
--stream=xbstream
We will compress the resulting archive using the built-in qpress compression library.
--compress
For the streaming part, we will pipe the innobackupex output to the s3gof3r client. We just need to pass the S3 bucket name and the final file name to gof3r, as shown below:
gof3r put -b mariabackup10 -k backup.xbcrypt
The resulting command will be:
# innobackupex --encrypt=AES256 --encrypt-key="24ED163062AE861363A586A3575B1C80" --stream=xbstream --compress . | gof3r put -b mariabackup10 -k backup.xbcrypt
The backup process should start and it should be streamed to S3. In most cases, innobackupex will complete its job before all the data has been uploaded to S3, so we will have to wait for gof3r to complete the transfer, which will end with a notification message such as this one:
duration: 9.074895022s
To recover and decrypt the backup, we execute the steps in reverse order. We invoke gof3r to retrieve our xbcrypt file, decrypt it on the fly with the same key we used for the initial encryption, and extract the xbstream archive to the restoredir directory. Make sure that the directory you want to extract to already exists.
# gof3r get -b mariabackup10 -k backup.xbcrypt | xbcrypt --encrypt-algo=AES256 --decrypt --encrypt-key="24ED163062AE861363A586A3575B1C80" | xbstream -x -C restoredir
Two steps remain to make the backup usable: we need to decompress it (this can’t be done on the fly, since not all files are compressed) and replay the logs:
# innobackupex --decompress restoredir
# innobackupex --apply-log restoredir
You are now free to copy the backup back to your regular MariaDB data directory or mount it on a new instance!
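For example, assuming the default data directory /var/lib/mysql, a stopped server and an empty datadir, a restore in place could look like this (innobackupex reads the target datadir from your my.cnf):
# service mysql stop
# innobackupex --copy-back restoredir
# chown -R mysql:mysql /var/lib/mysql
# service mysql start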
As a bonus, I have compiled the steps mentioned in this article in a handy shell script: s3backup. The usual warnings and disclaimers apply – this script is experimental and may not work for you.
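For reference, a minimal sketch of such a wrapper (not the actual s3backup script; the bucket, key name and encryption key below are placeholders to adapt) could look like this:
#!/bin/bash
# Minimal streaming backup sketch - adjust BUCKET, KEY and ENCRYPT_KEY to your setup
set -e -o pipefail
BUCKET="mariabackup10"
KEY="backup-$(date +%Y%m%d%H%M%S).xbcrypt"
ENCRYPT_KEY="24ED163062AE861363A586A3575B1C80"
# Stream an encrypted, compressed backup straight to S3
innobackupex --encrypt=AES256 --encrypt-key="$ENCRYPT_KEY" \
  --stream=xbstream --compress . | gof3r put -b "$BUCKET" -k "$KEY"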