Upgrading MariaDB Galera Cluster 5.5 to 10.0

Upgrading a running MariaDB Galera Cluster from 5.5 (previous stable) to 10.0 (stable) is a question which comes up frequently with Remote DBA customers. Although a standard migration from 5.5 to 10.0 is well covered in the Knowledge Base, Galera Cluster upgrades haven't really been documented in detail until now. This howto covers upgrades on CentOS or RHEL 6, but the same logic can be applied to Ubuntu/Debian as well.

Prerequisites

It is indeed possible to do a rolling cluster upgrade if the Galera API and provider versions are compatible. Please refer to my previous blog article (https://mariadb.com/blog/deciphering-galera-version-numbers) if you need to understand more about Galera versioning. A simple way to ensure Galera compatibility is to first upgrade to the latest 5.5 Galera version before attempting the upgrade to 10.0.
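A quick way to check what a node is currently running is to query the wsrep status variables, for example:

# mysql -e "SHOW STATUS LIKE 'wsrep_provider_version';"
# mysql -e "SHOW STATUS LIKE 'wsrep_protocol_version';"

Compare the output across nodes before and during the rolling upgrade to make sure the provider versions can still replicate with each other.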

Making sure that your Galera cache is sized large enough to avoid an SST is also a good idea. If you want to know more about the Galera cache, the following article covers it in detail: http://www.severalnines.com/blog/understanding-gcache-galera
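As a rough sketch, the Galera cache size is set through wsrep_provider_options in my.cnf; the 2G below is only an illustrative value and should be sized according to your write volume and how long each node will stay out of the cluster during the upgrade:

[mysqld]
wsrep_provider_options="gcache.size=2G"

Note that wsrep_provider_options takes a semicolon-separated list, so if you already set other provider options, append gcache.size to the existing value rather than replacing it.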

Also, before you start, please refer to the aforementioned 5.5 to 10.0 upgrade notes in the MariaDB KB (https://mariadb.com/kb/en/mariadb/upgrading-from-mariadb-55-to-mariadb-100/)

The “Incompatible changes between 5.5 and 10.0” section is worthy of your attention since some defaults have changed between the two major versions and some configuration options have been removed. Your first task should be to check your my.cnf file thoroughly and comment out or replace the missing or changed options before attempting a migration. Consider keeping a copy of the old my.cnf file in case you need to roll back to 5.5 at any point.
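A simple copy with a suffix is enough for that purpose; the paths below assume the default CentOS/RHEL layout:

# cp -a /etc/my.cnf /etc/my.cnf.55.bak
# cp -a /etc/my.cnf.d /etc/my.cnf.d.55.bak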

Upgrading the node

If you are using the official MariaDB yum repositories (which you should), you will have to modify the repository URL so that it points to the 10.0 repositories. This can be done with a simple one-liner:

# sed -i 's/5.5/10.0/' /etc/yum.repos.d/MariaDB.repo
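It is worth double-checking the result and clearing the cached repository metadata so that yum picks up the new URL:

# grep baseurl /etc/yum.repos.d/MariaDB.repo
# yum clean metadata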

If you use a load balancing proxy such as MaxScale or HAProxy, make sure to drain the server from the pool first so it does not receive any new connections.
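With HAProxy, for example, a server can be drained through the admin socket. This is only a sketch: the socket path (/var/run/haproxy.sock), backend name (galera) and server name (node-01) are assumptions that depend on your configuration, and the stats socket must be configured with admin level access:

# echo "disable server galera/node-01" | socat stdio /var/run/haproxy.sock

Once the node no longer receives connections, it can be stopped: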

# service mysql stop

If you try to upgrade MariaDB Galera server directly, the package manager will complain. As a safeguard, it is recommended to remove the previous version before attempting a major upgrade, so let's just do that.

# yum remove MariaDB-Galera-server
# yum install MariaDB-Galera-server MariaDB-client MariaDB-shared MariaDB-common galera
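To confirm that the 10.0 packages were installed:

# rpm -qa | grep -i mariadb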

At this point, the upgrade should have completed. We are now ready to start MariaDB.

# service mysql start

If the startup did not complete, it is most likely because you still have an incompatible option in your my.cnf file. In this case, check the error log:

# grep ERROR /var/lib/mysql/node-01.err

You might see the following lines:

150325 10:55:41 [ERROR] /usr/sbin/mysqld: unknown variable 'innodb_flush_neighbor_pages=none'
150325 10:55:41 [ERROR] Aborting

In this example, the innodb_flush_neighbor_pages option has been replaced by innodb_flush_neighbors in MariaDB 10.0, and the value corresponding to 'none' is 0, so make the appropriate change in your my.cnf file and start the service again.
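For reference, the corresponding my.cnf change would look like this, with the removed 5.5 option commented out and its 10.0 equivalent added:

# innodb_flush_neighbor_pages = none   (removed in 10.0)
innodb_flush_neighbors = 0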

When startup has completed (it will likely take a few minutes while the node catches up with the rest of the cluster through IST), you will see a number of errors in the log file similar to the following:

150325 10:57:48 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_account_by_event_name' has the wrong structure
150325 10:57:48 [ERROR] Column count of mysql.file_summary_by_event_name is wrong. Expected 23, found 5. Created with MariaDB 50536, now running 100017. Please use mysql_upgrade to fix this error.

We indeed have to run mysql_upgrade to fix the system tables, so let's do that. You might have read on the internet that the binary log must be disabled to avoid propagating the changes to other servers; this is no longer true with 10.0, as mysql_upgrade does not write its changes to the binary log anymore.

# mysql_upgrade

The script will output the usual information about the system tables being upgraded, and the data will also be checked for incompatibilities. There shouldn't be any major issues when migrating from 5.5 to 10.0 since the file format hasn't changed, and it shouldn't take more than a few minutes. When the upgrade is complete, we can restart the service:

# service mysql restart

Check the error log for any warnings or issues; there shouldn't be any. When the node has fully joined the cluster (a quick check follows below), you can proceed with the next node until all remaining 5.5 nodes have been migrated.
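On the upgraded node, the wsrep status variables give a quick confirmation: wsrep_local_state_comment should report Synced, and wsrep_cluster_size should match the expected number of nodes.

# mysql -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"
# mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size';"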