A Year In The Life Of MaxScale

This time of the year it is traditional, at least in the UK, to look back and reflect on the year that is coming to a close. Since we have just produced the release candidate for MaxScale and are looking forward to the GA release early in the New Year, it seems like a good time to reflect on the events that have brought us to this stage in the story of MaxScale.

Going Public

The start of 2014 also marked the start for MaxScale, with the first public announcements regarding MaxScale and the first downloadable binaries. MaxScale itself had been started internally before that, but we wanted to hold off on letting it out into “the wild” until enough of the functionality was in place to do more than make “what it might be” type promises. At first we only had tar files available, and only for CentOS/RedHat Linux distributions; we also made the source code available on GitHub for the first time.

That first version of MaxScale contained the fundamental modules it needed to demonstrate the functionality we wanted to show, namely the MySQL protocol modules that allowed connections using the MySQL client protocol and allowed MaxScale to connect to the backend MariaDB and MySQL databases. It also contained the first iterations of the monitors that allowed us to track the basic state of a Galera cluster or a Master/Slave setup using MySQL replication. The two classes of router that MaxScale supports were represented by the read connection router and an early version of the read/write splitter router; this early version placed many restrictions on the SQL that would work with the splitter. In the first few months of the year we worked on improving the functionality of these modules and talking about MaxScale at events such as MySQL meetup groups, FOSDEM and Percona Live. We also created the MaxScale Google Group as a way for users to communicate with us and for the MaxScale team to announce improvements and bug fixes within MaxScale. We also launched the public bug site, http://bugs.mariadb.com.
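
To give a flavour of how those pieces fitted together, the sketch below shows roughly what a configuration of that era looked like. The section names, addresses and credentials are purely illustrative, and the exact parameter names should be checked against the documentation for the release you are using.

    # global settings
    [maxscale]
    threads=4

    # the backend databases MaxScale connects to
    [dbserv1]
    type=server
    address=192.168.0.11
    port=3306
    protocol=MySQLBackend

    [dbserv2]
    type=server
    address=192.168.0.12
    port=3306
    protocol=MySQLBackend

    # monitor that tracks which server is the master and which are slaves
    [Replication Monitor]
    type=monitor
    module=mysqlmon
    servers=dbserv1,dbserv2
    user=maxmon
    passwd=maxmonpwd

    # a service pairing the read/write splitter router with those servers
    [RW Split Service]
    type=service
    router=readwritesplit
    servers=dbserv1,dbserv2
    user=maxuser
    passwd=maxuserpwd

    # the client-facing listener for that service
    [RW Split Listener]
    type=listener
    service=RW Split Service
    protocol=MySQLClient
    port=4006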

We had always planned MaxScale with five plugin classes: protocols, monitors, routers, filters and authentication. At this time two of those classes, filters and authentication, were missing completely. The filters we could do without for the time being, but we needed some form of authentication, so we built that into the protocol module as a short term expediency. We built what we called “transparent authentication”: we loaded the user data from the backend database at startup and used this to authenticate the users that logged into MaxScale. We had to do this because it is key to the design of MaxScale that we can create multiple connections from MaxScale to the backend databases, and create those connections at different times in the lifecycle of the client session. This meant we could not simply proxy the packet exchange of the login sequence; we had to do more than that. While this is a vital part of the design, it does lead to some challenges and is perhaps one area in which users need to be aware of MaxScale when creating authentication configurations within the underlying database cluster. A considerable amount of time has been invested in answering questions in this area and providing documentation, but it remains one of those areas that needs more time invested in explaining fully how to set up the authentication models that are required.

Improved Galera Support

We were considering better support for Galera clusters; in particular we wanted to do something that would allow us to segregate the write operations in order to remove write conflicts. We considered writing a Galera specific router that would send all writes to a single node whilst distributing the reads across all nodes. It had been my plan to use this as a tutorial on the subject of writing your own router. It was while doing this that I realised it was not really necessary, since our existing read/write splitter could do the task for us; it just needed a way to determine which node should receive the writes.

The Read/Write splitter already used the Master bit in the server status to determine which database should receive the writes, therefore all that was needed was for the Galera monitor to select one of the running nodes as the nominal master and set the Master bit for that node. We wanted the selection to be predictable, so that if two MaxScales were front ending the same Galera cluster they would both choose the same node as the master. Since Galera allocates each node an ID, we simply selected the node with the lowest ID as the master. The result was a solution that provided a Galera cluster as a high availability solution with read scale out for MySQL/MariaDB. Failover was very quick, since there was no database activity or binlog manipulation required, as there would be with a MySQL replication cluster. What this solution does not give you is any write scale out ability.
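
As a rough illustration, and with section names that are merely indicative (the server definitions are the same shape as in the earlier sketch and are omitted here for brevity), pairing the Galera monitor with the ordinary read/write splitter looks something like this:

    # galeramon promotes the synced node with the lowest node ID to Master
    # and marks the remaining synced nodes as Slaves
    [Galera Monitor]
    type=monitor
    module=galeramon
    servers=galera1,galera2,galera3
    user=maxmon
    passwd=maxmonpwd

    # the unchanged read/write splitter then sends writes to the nominal
    # master and spreads reads across the other nodes
    [Galera Service]
    type=service
    router=readwritesplit
    servers=galera1,galera2,galera3
    user=maxuser
    passwd=maxuserpwd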

Users and Surprises

It had always been my hope that users would find MaxScale useful and want to extend its functionality to fit their particular needs; this is part of the reason for the plugin nature of MaxScale and the division of responsibilities between those plugins. However it was somewhat surprising to be contacted with the idea of using MaxScale not as a proxy between the database clients and the databases, but as a proxy between the master and slaves in a MySQL replication cluster. I have to admit that my first reaction was that this was really not something MaxScale was designed to do, but upon meeting with the people proposing it I was convinced not just that it was an extremely good idea, but also that it was something we could fit into the MaxScale architecture without having to make changes to the core or jeopardising the original MaxScale concept or design. So began a period of working closely with Booking.com to produce a specialist router module that allows MaxScale to act as an intermediate master within a replication tree. It was particularly nice to have such an unexpected use case presented to us and to find that MaxScale was flexible enough to facilitate it. It was also nice to then see such a well respected member of the user community publicly talk about this use of our work and even present it at conferences.

We have also had a number of other requests from users through the year; these have resulted in either the modification of existing modules to fit better in a given environment or the creation of completely new modules. This has included a number of different monitor modules to cover more database deployment architectures, MMM multi-master and MySQL NDB Cluster being two cases of this. We also produced a hinting mechanism, at the request of a user, such that the original SQL text could include hints as to the destination to which statements should be sent.
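
To illustrate, and based on my recollection of the module and hint syntax rather than any particular release notes, the hints are carried in SQL comments of the form “-- maxscale route to master” or “-- maxscale route to server <name>”, and the hint filter simply needs to be declared and attached to a service:

    # hypothetical example; check the filter documentation for exact names
    [Hint]
    type=filter
    module=hintfilter

    [RW Split Service]
    type=service
    router=readwritesplit
    servers=dbserv1,dbserv2
    user=maxuser
    passwd=maxuserpwd
    # attach the hint filter so embedded comments can influence routing
    filters=Hint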

We have also come across other users that wanted additional functionality who have written or are writing plugin modules of their own. This again has been a vindication of our original concept of the pluggable architecture and the flexibility that we had hoped to achieve.

Filters & Other New Features

Throughout the year we have also added many new features to MaxScale, the biggest of these probably being the introduction of the filter API and a set of “example” filters for various tasks. This has provided us with some simple logging mechanisms, query rewriting and query duplication functionality. Other features that have been added include the ability to monitor not just a single level master/slave setup but a fully hierarchical replication cluster.
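
As a rough sketch of what those example filters look like in a configuration file (the module and parameter names below are as I recall them and are worth verifying against the documentation), each filter is declared in its own section and then chained onto a service:

    # log every query passing through the service
    [QueryLogger]
    type=filter
    module=qlafilter
    filebase=/var/log/maxscale/queries

    # rewrite matching statement text before it reaches the router
    [Rewrite]
    type=filter
    module=regexfilter
    match=fetch
    replace=select

    # duplicate statements to a second, hypothetical service
    [Duplicate]
    type=filter
    module=tee
    service=Archive Service

    [RW Split Service]
    type=service
    router=readwritesplit
    servers=dbserv1,dbserv2
    user=maxuser
    passwd=maxuserpwd
    # filters are applied in the order they are listed
    filters=QueryLogger|Rewrite|Duplicate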

The concept of weighting in the routing decision has been added; this provides a way for MaxScale to manage a set of servers that have dissimilar configurations, or to segregate load between servers within a cluster. A client application has been added to allow some management and monitoring of MaxScale to be undertaken. The read/write splitter has been enhanced not just to remove the limitations that existed when it was first launched, but also to add new facilities to control the routing and to support constructs that were not previously available.
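
The weighting mechanism works by naming, on the service, a server parameter to weight by and then giving each server a value for that parameter. The sketch below uses an invented parameter name, serversize, purely to show the shape of such a configuration:

    [RW Split Service]
    type=service
    router=readwritesplit
    servers=bigserver,smallserver
    # distribute load in proportion to each server's "serversize" value
    weightby=serversize
    user=maxuser
    passwd=maxuserpwd

    [bigserver]
    type=server
    address=192.168.0.21
    port=3306
    protocol=MySQLBackend
    # receives roughly three times the share given to smallserver
    serversize=3000

    [smallserver]
    type=server
    address=192.168.0.22
    port=3306
    protocol=MySQLBackend
    serversize=1000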

Authentication has also undergone some change, with the addition of a mechanism to allow MaxScale to track new or changed users within the database and the ability to wildcard match IP addresses in the MySQL user table. Support has also been added for users that have been given access to only a subset of databases.

The MySQL Replication cluster monitor has also been extended to measure slave lag within the cluster. This slave lag is then made available to the routing modules, so that the Read/Write splitter can be configured to disregard slaves that are more than a certain amount behind the master, or to always send read operations to the slave that is most up to date.
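
Assuming the option names I recall from the documentation of the time (worth double checking for your release), the lag measurement is switched on in the monitor and then consumed by the router, roughly as follows:

    [Replication Monitor]
    type=monitor
    module=mysqlmon
    servers=master1,slave1,slave2
    user=maxmon
    passwd=maxmonpwd
    # write a periodic heartbeat so the lag of each slave can be measured
    detect_replication_lag=1

    [RW Split Service]
    type=service
    router=readwritesplit
    servers=master1,slave1,slave2
    user=maxuser
    passwd=maxuserpwd
    # do not route reads to slaves more than 30 seconds behind the master
    max_slave_replication_lag=30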

As well as new features and enhancements we have also spent a lot of time fixing bugs, writing test cases and generally trying to improve MaxScale and the MaxScale documentation.

Packaging is another area that has benefited from more attention during the year, with RPM and Debian packages being introduced, allowing for much easier installation of MaxScale and also reflecting support for more Linux distributions.

Beta Testing & Towards First GA

The last few months have seen the focus of the MaxScale team shift towards the upcoming GA release, with new features playing less of a role in the development. That is not to say there have not been any new features, but these have been targeted for releases after the first GA release. This allows us to concentrate on stabilising the features in that GA release and having them more fully tested, without preventing new features from moving forwards.

We also engaged with a number of users that expressed an interest in becoming beta testers for the MaxScale GA release. This, together with the internal testing we have been performing, is helping to improve the quality of the MaxScale core and those modules that will be included in the first GA release.

Thank You

MaxScale of course is not a “one man band” and I would like to thank the developers that have been working with me over the year, Massimiliano and Vilho, plus those that have joined the team during the later part of the year: Markus, Martin & Timofey. They have all put an enormous amount of effort in to bring MaxScale to the brink of a GA release. I should also not forget Ivan Zoratti, with whom a lot of the early ideas for MaxScale were developed; although he is no longer working with the team, his influence is still felt in a lot of areas.

The other group I would like to thank are the users that have given their time to try MaxScale and give us feedback and encouragement that what we are working on is useful and has a place in the database infrastructure. It has also been very gratifying to get personal feedback at events and to see others outside of the group start to talk about MaxScale and share some of the same thoughts we have had within the group as to what MaxScale can do within a database deployment – even if some of those have surprised us.

The MaxScale Release Candidate packages are available for download from http://www.mariadb.com/